I'm on a trial of Elastic Cloud, but I'm running into a problem creating a pipeline from Logstash to Elastic Cloud. Here is my logstash.conf output section:
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts    => ["https://<clusterid>.asia-southeast1.gcp.cloud.es.io:9243"]
    index    => "testindex"
    user     => "elasticdeploymentcredentials"
    password => "elasticdeploymentcredentials"
  }
}
But it keeps returning errors like:
[WARN ] 2021-03-29 12:24:50.148 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io"}
[WARN ] 2021-03-29 12:24:55.158 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io: Name or service not known"}
[WARN ] 2021-03-29 12:25:00.163 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io"}
[WARN ] 2021-03-29 12:25:05.170 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io: Name or service not known"}
[WARN ] 2021-03-29 12:25:10.175 [Ruby-0-Thread-9: :1] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://elastic:xxxxxx@<clusterid>.asia-southeast1.gcp.cloud.es.io:9243/][Manticore::ResolutionFailure] <clusterid>.asia-southeast1.gcp.cloud.es.io"}
It is possible for me to curl it with my credentials:
[root@localhost testconfig]# curl https://elasticdeploymentcredentials:elasticdeploymentcredentials@<clusterid>.asia-southeast1.gcp.elastic-cloud.com:9243
It returns:
{
  "name" : "name",
  "cluster_name" : "<clusterid>",
  "cluster_uuid" : "<clusteruuid>",
  "version" : {
    "number" : "7.12.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "build_hash",
    "build_date" : "2021-03-18T06:17:15.410153305Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
Am I missing something?
Instead of connecting to Elastic Cloud with the username/password from the deployment, try using the cloud_id/cloud_auth combination:
output {
  elasticsearch {
    # cloud_id replaces the hosts setting; don't set both
    cloud_id   => "your cloudid from the console"
    cloud_auth => "elastic:password"
    index      => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
The cloud_auth parameter is where you are actually going to use the username/password from the deployment. More information here:
https://www.elastic.co/guide/en/logstash/7.12/connecting-to-cloud.html
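For reference, a minimal sketch of that output for the index used in the question (the cloud_id value is a placeholder; copy the real one from the Elastic Cloud console):
output {
  elasticsearch {
    cloud_id   => "<deployment-name>:<base64-data-from-console>"   # placeholder, not a real ID
    cloud_auth => "elastic:<deployment password>"
    index      => "testindex"
  }
}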
Related
I'm trying to get OpenSearch configured on my local machine, and am deploying it through docker-compose using the following configuration:
opensearch:
  image: opensearchproject/opensearch:1.0.0
  restart: unless-stopped
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    discovery.type: single-node
The instance starts successfully. However, when trying to access it through the web interface, it only accepts HTTPS connections with the default basic auth credentials (admin:admin); i.e.
https://localhost:9200 asks me to enter administrator credentials and, upon doing so, returns the expected response:
{
  "name" : "a39dcf825899",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "d2ZBZDQRTyG6SvYlCmX3Iw",
  "version" : {
    "distribution" : "opensearch",
    "number" : "1.0.0",
    "build_type" : "tar",
    "build_hash" : "34550c5b17124ddc59458ef774f6b43a086522e3",
    "build_date" : "2021-07-02T23:22:21.383695Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}
However, when attempting to connect to the instance over plain HTTP, I get an empty response.
In Chrome: an empty response page (screenshot omitted).
Using the OpenSearch Python client on a Django instance running in a separate Docker container (part of the same docker-compose.yml):
opensearchpy.exceptions.ConnectionError: ConnectionError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))) caused by: ProtocolError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')))
For reference, the code I am using to connect the OpenSearch Python client to the OpenSearch instance is:
from opensearchpy import OpenSearch

cls._os_client = OpenSearch(
    [{"host": "opensearch", "port": 9200}],
    use_ssl=False,
    verify_certs=False,
    ssl_assert_hostname=False,
    ssl_show_warn=False,
)
How can I configure OpenSearch to allow insecure HTTP connections?
You can disable the security plugin entirely: just add DISABLE_SECURITY_PLUGIN=true to your environment.
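A minimal sketch of the same docker-compose service with that flag applied (everything else is unchanged from the question; don't do this outside local development):
opensearch:
  image: opensearchproject/opensearch:1.0.0
  restart: unless-stopped
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    discovery.type: single-node
    DISABLE_SECURITY_PLUGIN: "true"   # serve plain HTTP with no auth
With the plugin disabled, http://localhost:9200 responds without credentials, and the Python client settings from the question (use_ssl=False) work as-is.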
I have installed Logstash, Elasticsearch, and Kibana on a single instance, and also installed X-Pack for TLS communication. I enabled SSL communication; Elasticsearch and Kibana work fine, but Logstash is unable to connect to Elasticsearch, even though I can curl the Elasticsearch URL https://localhost:9200 and no firewall is blocking it.
I generated an OpenSSL certificate and key file and kept them in Elasticsearch.
input {
  beats {
    client_inactivity_timeout => 1000
    port => 5044
  }
}
filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601} %{LOGLEVEL:loglevel} zeppelin IDExtractionService transactionId %{WORD:transaction_id} operation %{WORD:otype} received request duration %{NUMBER:duration} exception %{WORD:error}" ]
  }
}
filter {
  if "beats_input_codec_plain_applied" in [tags] {
    mutate {
      remove_tag => ["beats_input_codec_plain_applied"]
    }
  }
}
filter {
  if "_grokparsefailure" in [tags] {
    mutate {
      remove_tag => ["_grokparsefailure"]
    }
  }
}
From logstash.yml:
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: https://localhost:9200
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    user => "elastic"
    password => "password"
    manage_template => false
    # ssl_certificate_verification => false
    ssl => true
    cacert => '/etc/elasticsearch/ca/key.pem'
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
Elasticsearch config file:
cluster.name: my-application
network.host: 0.0.0.0
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /opt/elasticsearch/ca/ca.key
xpack.security.http.ssl.certificate: /opt/elasticsearch/ca/ca.crt
Logstash log file:
[2018-05-16T05:28:16,421][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:17,201][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>1, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-05-16T05:28:21,422][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:21,422][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:21,424][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:21,425][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:22,202][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>1, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
[2018-05-16T05:28:26,425][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:26,426][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@localhost:9200/, :path=>"/"}
[2018-05-16T05:28:26,427][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:26,427][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://logstash_system:xxxxxx@localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://logstash_system:xxxxxx@localhost:9200/][Manticore::ClientProtocolException] localhost:9200 failed to respond"}
[2018-05-16T05:28:27,201][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>1, "stalling_thread_info"=>{"other"=>[{"thread_id"=>24, "name"=>nil, "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/interval.rb:89:in `sleep'"}]}}
root@5c417caecc5f:/var/log/logstash#
You have to enable monitoring for elasticsearch in the logstash.yml configuration file.
/etc/logstash/logstash.yml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: http://X.X.X.X:9200
See this post for more information:
https://discuss.elastic.co/t/elasticsearch-unreachable-error-in-logstash/75157/7
And the documentation (the TLS/SSL monitoring settings listed below may be needed):
https://www.elastic.co/guide/en/logstash/6.2/configuring-logstash.html#monitoring-settings
xpack.monitoring.elasticsearch.ssl.ca
xpack.monitoring.elasticsearch.ssl.truststore.path
xpack.monitoring.elasticsearch.ssl.truststore.password
xpack.monitoring.elasticsearch.ssl.keystore.path
xpack.monitoring.elasticsearch.ssl.keystore.password
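Since your Elasticsearch serves HTTPS, here is a sketch of what the monitoring section of /etc/logstash/logstash.yml might look like (the CA path and credentials are assumptions based on the config in your question):
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: https://localhost:9200
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: <logstash_system password>
xpack.monitoring.elasticsearch.ssl.ca: /etc/elasticsearch/ca/ca.crt   # CA that signed the HTTP certificate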
If this doesn't work, can I see your /etc/logstash/logstash.yml configuration file?
I have 2 servers: one RabbitMQ and one ELK server.
Both run independently as they should. My ELK instance receives input messages from several sources across my network, and both servers are on the same internal network.
I am trying to make Logstash read from RabbitMQ, to get any log messages I pushed into there.
Here's my logstash conf.d file:
input {
  rabbitmq {
    host => "1.66.66.66"
    queue => "logs"
    durable => true
    exchange => "event.log"
    threads => 1
    prefetch_count => 50
    port => 5672
    user => "elk"
    password => "*******"
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["1.66.66.1:9200"]
    sniffing => false
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
After I run a config test ($ sudo service logstash configtest)
I get: ConfigTest OK
So this seems good.
But then when it runs I get the following error in my /var/log/logstash/logstash.log file:
{:timestamp=>"2017-06-15T10:28:18.727000-0400", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'", "org/jruby/ext/timeout/Timeout.java:147:in `timeout'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:79:in `reload_connections!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}
If, from my ELK server, I do:
curl -X GET http://1.66.66.1:9200
I get the proper response:
{
  "name" : "Hera",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "-NmCwDk_RA2qzTwWlNNizQ",
  "version" : {
    "number" : "2.4.5",
    "build_hash" : "c849dd13904f53e63e88efc33b2ceeda0b6a1276",
    "build_timestamp" : "2017-04-24T16:18:17Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.4"
  },
  "tagline" : "You Know, for Search"
}
If you know anything I can try, that would be much appreciated. I am running Ubuntu 16.04 on both servers. Thanks!
I have Elasticsearch properly configured on my server. I can do everything from the command line using cURL. I can even connect to it using cURL from a PHP script outside Yii. However, I can't seem to get it to work from within Yii 2.0.
In my config, I have:
'elasticsearch' => [
    'class' => 'yii\elasticsearch\Connection',
    'nodes' => [
        ['http_address' => 'localhost:9200'],
        // configure more hosts if you have a cluster
    ],
],
But when I try to do a simple query in Yii, I get this error. Note how it's using my server IP address rather than 'localhost' or '127.0.0.1'. Note: I've hashed out my IP address for security.
Elasticsearch Database Exception – yii\elasticsearch\Exception
Elasticsearch request failed: 7 - Failed to connect to ##.##.##.### port 9200: Connection refused
Error Info: Array
(
    [requestMethod] => GET
    [requestUrl] => http://##.##.##.###:9200/profiles/profile/_search
    [requestBody] => {"size":100,"query":{"match_all":{}}}
    [responseHeaders] => Array
        (
        )
    [responseBody] =>
)
I was able to fix this error by updating Elasticsearch to a version > 1.3.0, which is the minimum requirement for yiisoft/yii2-elasticsearch.
Run curl -X GET 'http://127.0.0.1:9200' to check which version you are running.
First, follow these steps to download Elasticsearch:
wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.5.2.tar.gz
mkdir es
tar -xf elasticsearch-1.5.2.tar.gz -C es
cd es
./bin/elasticsearch
Then you should be able to access localhost:9200 and get something like this:
{
  "name" : "Sigyn",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.4.0",
    "build_hash" : "ce9f0c7394dee074091dd1bc4e9469251181fc55",
    "build_timestamp" : "2016-08-29T09:14:17Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
Then, secondly, follow the instructions at https://github.com/yiisoft/yii2-elasticsearch. Then you are done.
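As a quick connectivity check from inside Yii, a sketch (assuming the elasticsearch component is configured as in the question):
// e.g. in a controller action or console command
$connection = \Yii::$app->get('elasticsearch');
$connection->open(); // throws yii\elasticsearch\Exception if no node is reachable
echo "Elasticsearch is reachable\n";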
I have installed Logstash + Elasticsearch + Kibana on one host and received the error from the title. I have googled all over the related topics, still no luck, and I'm stuck.
I will share the configs I have made:
elasticsearch.yml
cluster.name: hive
node.name: "logstash-central"
network.bind_host: 10.1.1.25
Output from /var/log/elasticsearch/hive.log:
[2015-01-13 15:18:06,562][INFO ][node ] [logstash-central] initializing ...
[2015-01-13 15:18:06,566][INFO ][plugins ] [logstash-central] loaded [], sites []
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] initialized
[2015-01-13 15:18:09,275][INFO ][node ] [logstash-central] starting ...
[2015-01-13 15:18:09,385][INFO ][transport ] [logstash-central] bound_address {inet[/10.1.1.25:9300]}, publish_address {inet[/10.1.1.25:9300]}
[2015-01-13 15:18:09,401][INFO ][discovery ] [logstash-central] hive/T2LZruEtRsGPAF_Cx3BI1A
[2015-01-13 15:18:13,173][INFO ][cluster.service ] [logstash-central] new_master [logstash-central][T2LZruEtRsGPAF_Cx3BI1A][logstash.tw.intra][inet[/10.1.1.25:9300]], reason: zen-disco-join (elected_as_master)
[2015-01-13 15:18:13,193][INFO ][http ] [logstash-central] bound_address {inet[/10.1.1.25:9200]}, publish_address {inet[/10.1.1.25:9200]}
[2015-01-13 15:18:13,194][INFO ][node ] [logstash-central] started
[2015-01-13 15:18:13,209][INFO ][gateway ] [logstash-central] recovered [0] indices into cluster_state
Accessing logstash.example.com:9200 gives the ordinary output like in the ES guide:
{
  "status" : 200,
  "name" : "logstash-central",
  "cluster_name" : "hive",
  "version" : {
    "number" : "1.4.2",
    "build_hash" : "927caff6f05403e936c20bf4529f144f0c89fd8c",
    "build_timestamp" : "2014-12-16T14:11:12Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.2"
  },
  "tagline" : "You Know, for Search"
}
Accessing http://logstash.example.com:9200/_status gives the following:
{"_shards":{"total":0,"successful":0,"failed":0},"indices":{}}
Kibana's config.js is default:
elasticsearch: "http://"+window.location.hostname+":9200"
Kibana is used via nginx. Here is /etc/nginx/conf.d/nginx.conf:
server {
  listen *:80;
  server_name logstash.example.com;
  location / {
    root /usr/share/kibana3;
  }
}
Logstash config file is /etc/logstash/conf.d/central.conf:
input {
  redis {
    host => "10.1.1.25"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "logstash.example.com"
  }
}
Redis is working and the traffic passes between the master and slave (I've checked it via tcpdump).
15:46:06.189814 IP 10.1.1.50.41617 > 10.1.1.25.6379: Flags [P.], seq 89560:90064, ack 1129, win 115, options [nop,nop,TS val 3572086227 ecr 3571242836], length 504
netstat -apnt shows the following:
tcp 0 0 10.1.1.25:6379 10.1.1.50:41617 ESTABLISHED 21112/redis-server
tcp 0 0 10.1.1.25:9300 10.1.1.25:44011 ESTABLISHED 22598/java
tcp 0 0 10.1.1.25:9200 10.1.1.35:51145 ESTABLISHED 22598/java
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22379/nginx
Could you please tell me which way I should investigate the issue?
Thanks in advance.
The problem is likely due to the nginx setup and the fact that Kibana, while installed on your server, is running in your browser and trying to access Elasticsearch from there. The typical way this is solved is by setting up a proxy in nginx and then changing your config.js.
You appear to have a correct nginx proxy set up for Kibana, but you'll need some additional work for Kibana to be able to access Elasticsearch.
Check the comments on this post: http://vichargrave.com/ossec-log-management-with-elasticsearch/
And check this post: https://groups.google.com/forum/#!topic/elasticsearch/7hPvjKpFcmQ
And this sample nginx config: https://github.com/johnhamelink/ansible-kibana/blob/master/templates/nginx.conf.j2
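For example, a sketch of a proxy location that could be added inside the existing server block (the path and upstream address are assumptions; adjust to your layout), after which Kibana's config.js would point at "http://"+window.location.hostname+"/es":
location /es/ {
  proxy_pass http://10.1.1.25:9200/;   # trailing slash strips the /es/ prefix before forwarding
  proxy_set_header Host $host;
}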
You'll have to specify the protocol for Elasticsearch in the output section:
elasticsearch {
  host => "logstash.example.com"
  protocol => 'http'
}
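For context: in the 1.x/2.x versions of this output plugin, protocol accepts node, transport, or http. With http the events go to port 9200 over the REST API, while node and transport join the cluster over port 9300, which often fails when only the HTTP port is reachable.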