Logstash not producing output although pipeline main starts - elasticsearch

I'm trying to add Apache logs to Kibana using Elasticsearch and Logstash, but Logstash doesn't create an index in Elasticsearch, so I'm not able to visualize the data in Kibana.
This is my docker-compose file:
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4
  container_name: elasticsearch
  hostname: elasticsearch
  environment:
    - cluster.name=docker-cluster
    - bootstrap.memory_lock=true
    - ES_JAVA_OPTS=-Xms512m -Xmx512m
    - http.cors.enabled=true
    - http.cors.allow-origin="*"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  volumes:
    - esdata1:/usr/share/elasticsearch/data
  ports:
    - 9200:9200
logstash:
  image: docker.elastic.co/logstash/logstash-oss:6.2.4
  restart: unless-stopped
  depends_on:
    - elasticsearch
  volumes:
    - ./logstash-apache.conf:/opt/logstash/logstash-apache.conf
    - ./logs:/logs/access_log
  links:
    - elasticsearch
  command: logstash -f /opt/logstash/logstash-apache.conf
kibana:
  image: docker.elastic.co/kibana/kibana-oss:6.2.4
  container_name: kibana
  volumes:
    - esdata2:/usr/share/kibana/config/data
  ports:
    - 5601:5601
  depends_on:
    - elasticsearch
  links:
    - elasticsearch
volumes:
  esdata1:
    driver: local
  esdata2:
    driver: local
This is my logstash-apache.conf:
input {
  file {
    type => "apache_access"
    path => "/var/log/httpd/access_log"
    start_position => beginning
  }
}
filter {
  if [type] == "apache_access" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}( \*\*%{POSINT:responsetime}\*\*)?" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "apache_logstash-%{+YYYY.MM.dd}"
  }
}
This is the output message:
logstash_1| [2018-07-18T23:55:25,926][INFO ][logstash.agent] Pipelines running {:count=>1, :pipelines=>["main"]}
I have no errors in my output, but the problem is that Logstash is not producing any data. What should I do? Could anyone help me, please?
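(One thing worth checking, as a hedged aside rather than a confirmed fix: the file input only reads paths that exist inside the Logstash container, and the compose file above mounts ./logs at /logs/access_log while the pipeline reads /var/log/httpd/access_log. A minimal sketch of a matching mount/path pair, assuming the Apache log lives at ./logs/access_log on the host:)

logstash:
  image: docker.elastic.co/logstash/logstash-oss:6.2.4
  volumes:
    # /usr/share/logstash/pipeline is the image's default pipeline directory
    - ./logstash-apache.conf:/usr/share/logstash/pipeline/logstash-apache.conf
    # mount the whole directory so the file shows up as /logs/access_log
    - ./logs:/logs

input {
  file {
    type => "apache_access"
    path => "/logs/access_log"
    start_position => "beginning"
    sincedb_path => "/dev/null"   # for testing: re-read the file from the start on every run
  }
}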

Related

Could not connect Logstash to Kafka via compose file

I'm using a compose file to create a data pipeline between Logstash and Kafka, but this message shows up in the Logstash container. Could someone help me out?
The message:
[WARN ][org.apache.kafka.clients.NetworkClient] [Consumer clientId=logstash-0, groupId=logstash] Connection to node 2 could not be established. Broker may not be available.
My compose file:
version: "3"
services:
zookeeper:
image: confluentinc/cp-zookeeper:6.2.0
container_name: zookeeper
ports:
- "2181:2181"
networks:
- kafkanet
environment:
ZOOKEEPER_CLIENT_PORT: "2181"
ZOOKEEPER_TICK_TIME: "2000"
ZOOKEEPER_SYNC_LIMIT: "2"
kafkaserver:
image: confluentinc/cp-kafka:6.2.0
container_name: kafka
ports:
- "9092:9092"
networks:
- kafkanet
environment:
KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://localhost:9092"
KAFKA_BROKER_ID: "2"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
depends_on:
- zookeeper
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
container_name: elasticsearch
ports:
- 9200:9200
- 9300:9300
networks:
- kafkanet
kibana:
image: docker.elastic.co/kibana/kibana:6.4.0
container_name: kibana
ports:
- 5601:5601
networks:
- kafkanet
depends_on: [ 'elasticsearch' ]
# Logstash Docker Image
logstash:
image: docker.elastic.co/logstash/logstash:6.4.0
container_name: logstash
networks:
- kafkanet
depends_on: [ 'elasticsearch', 'kafkaserver' ]
volumes:
- './logstash/config:/usr/share/logstash/pipeline/'
networks:
kafkanet:
driver: bridge
./logstash/config/logstash.conf
input {
  kafka {
    bootstrap_servers => "kafkaserver:9092"
    topics => ["sit.catalogue.item","uat.catalogue.item"]
    auto_offset_reset => "earliest"
    decorate_events => true
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "%{[indexPrefix]}-logs-%{+YYYY.MM.dd}"
  }
}
Your advertised listener in Kafka is not right. It should be kafkaserver.
So instead of:
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://localhost:9092"
you need:
KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafkaserver:9092"
For more details, see this blog post that I wrote.
BTW, if you're pushing data from Kafka to Elasticsearch, you should check out Kafka Connect as another option.
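(If you go the Kafka Connect route, here is a rough, hypothetical sketch of an Elasticsearch sink connector config, reusing the topic names and service names from the compose file above; the connector name and field values are illustrative, not a drop-in configuration:)

{
  "name": "es-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "topics": "sit.catalogue.item,uat.catalogue.item",
    "connection.url": "http://elasticsearch:9200",
    "key.ignore": "true",
    "schema.ignore": "true"
  }
}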

Laravel Dusk is not working on Docker/ docker-compose.yaml

I am working on a Laravel project and have started writing browser tests using Dusk. I am using Docker as my development environment. When I run the tests, I get a "Connection refused" error.
This is my docker-compose.yaml file.
version: '3'
services:
  apache:
    container_name: res_apache
    image: webdevops/apache:ubuntu-16.04
    environment:
      WEB_DOCUMENT_ROOT: /var/www/public
      WEB_ALIAS_DOMAIN: restaurant.localhost
      WEB_PHP_SOCKET: php-fpm:9000
    volumes: # Only shared dirs to apache (to be served)
      - ./public:/var/www/public:cached
      - ./storage:/var/www/storage:cached
    networks:
      - res-network
    ports:
      - "8081:80"
      - "443:443"
  php-fpm:
    container_name: res_php
    image: jguyomard/laravel-php:7.3
    volumes:
      - ./:/var/www/
      - ./ci:/var/www/ci:cached
      - ./vendor:/var/www/vendor:delegated
      - ./storage:/var/www/storage:delegated
      - ./node_modules:/var/www/node_modules:cached
      - ~/.ssh:/root/.ssh:cached
      - ./composer.json:/var/www/composer.json
      - ./composer.lock:/var/www/composer.lock
      - ~/.composer/cache:/root/.composer/cache:delegated
    networks:
      - res-network
  db:
    container_name: res_db
    image: mariadb:10.2
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: restaurant
      MYSQL_USER: restaurant
      MYSQL_PASSWORD: secret
    volumes:
      - res-data:/var/lib/mysql
    networks:
      - res-network
    ports:
      - "33060:3306"
  chrome:
    image: robcherry/docker-chromedriver
    networks:
      - res-network
    environment:
      CHROMEDRIVER_WHITELISTED_IPS: ""
      CHROMEDRIVER_PORT: "9515"
    ports:
      - 9515:9515
    cap_add:
      - "SYS_ADMIN"
networks:
  res-network:
    driver: "bridge"
volumes:
  res-data:
    driver: "local"
The following is the driver() function in the DuskTestCase.php class:
/**
 * Create the RemoteWebDriver instance.
 *
 * @return \Facebook\WebDriver\Remote\RemoteWebDriver
 */
protected function driver()
{
    $options = (new ChromeOptions)->addArguments([
        '--disable-gpu',
        '--headless',
        '--window-size=1920,1080',
    ]);

    return RemoteWebDriver::create(
        'http://localhost:9515', DesiredCapabilities::chrome()->setCapability(
            ChromeOptions::CAPABILITY, $options
        )
    );
}
I run the tests with the following command:
docker-compose exec php-fpm php artisan dusk
Then I get the following error.
Facebook\WebDriver\Exception\WebDriverCurlException: Curl error thrown for http POST to /session with params: {"capabilities":{"firstMatch":[{"browserName":"chrome","goog:chromeOptions":{"args":["--disable-gpu","--headless","--window-size=1920,1080"]}}]},"desiredCapabilities":{"browserName":"chrome","platform":"ANY","chromeOptions":{"args":["--disable-gpu","--headless","--window-size=1920,1080"]}}}
Failed to connect to localhost port 9515: Connection refused
/var/www/vendor/php-webdriver/webdriver/lib/Remote/HttpCommandExecutor.php:331
What is wrong with my configuration and how can I fix it?
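(Not a confirmed fix, but a pattern that often applies to this layout: artisan dusk runs inside the php-fpm container, so localhost:9515 refers to that container rather than the chrome service. A hypothetical sketch of the driver pointed at the compose service name on the shared network:)

protected function driver()
{
    $options = (new ChromeOptions)->addArguments([
        '--disable-gpu',
        '--headless',
        '--window-size=1920,1080',
    ]);

    return RemoteWebDriver::create(
        'http://chrome:9515', // "chrome" is the compose service name on res-network
        DesiredCapabilities::chrome()->setCapability(ChromeOptions::CAPABILITY, $options)
    );
}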

Elasticsearch with xpack security fails

I am trying to set up a simple ELK stack using Docker. While xpack security is disabled it starts fine and I can access the Kibana interface. If xpack security is enabled, I get a "Kibana server is not ready yet" error from the Kibana interface. This error is most likely caused by this Elasticsearch error:
{"type": "server", "timestamp": "2020-08-03T15:35:10,134Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elastic-cluster", "node.name": "elasticsearch", "message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-7-2020.08.03][0]]]).", "cluster.uuid": "Vdk1-_4sSvuqlEspQcF-6A", "node.id": "PZMUpi_JSJS6IZ7tv6H22g" }
{"type": "server", "timestamp": "2020-08-03T15:35:10,560Z", "level": "ERROR", "component": "o.e.x.s.a.e.NativeUsersStore", "cluster.name": "elastic-cluster", "node.name": "elasticsearch", "message": "security index is unavailable. short circuiting retrieval of user [elasticadmin]", "cluster.uuid": "Vdk1-_4sSvuqlEspQcF-6A", "node.id": "PZMUpi_JSJS6IZ7tv6H22g" }
This is my elasticsearch.yml:
cluster.name: elastic-cluster
node.name: elasticsearch
network.host: 0.0.0.0
transport.host: 0.0.0.0
## Cluster Settings
discovery.seed_hosts: elasticsearch
cluster.initial_master_nodes: elasticsearch
## License
xpack.license.self_generated.type: basic
# Security
xpack.security.enabled: true
## - ssl
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs/elasticsearch.key
xpack.security.transport.ssl.certificate: certs/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
## - http
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.key: certs/elasticsearch.key
#xpack.security.http.ssl.certificate: certs/elasticsearch.crt
#xpack.security.http.ssl.certificate_authorities: certs/ca.crt
#xpack.security.http.ssl.client_authentication: optional
# Monitoring
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
This is the error log from Kibana:
{"type":"log","#timestamp":"2020-08-03T15:42:22Z","tags":["warning","plugins","licensing"],"pid":6,"
message":"License information could not be obtained from Elasticsearch due to [security_exception] unable to authenticate user [elasticadmin] for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elasticadmin] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elasticadmin] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"} error"}
Basic curl request:
curl -H "Authorization: Basic ZWxhc3RpY2FkbWluOjEyMzQ1Njc4OQ==" -XGET "http://localhost:9200/_cat/nodes?v&pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "unable to authenticate user [elasticadmin] for REST request [/_cat/nodes?v&pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "unable to authenticate user [elasticadmin] for REST request [/_cat/nodes?v&pretty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}
Another Auth request:
docker@docker:~$ curl -H "Authorization: Basic ZWxhc3RpY2FkbWluOjEyMzQ1Njc4OQ" -XGET "http://localhost:9200/_security/_authenticate"
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [elasticadmin] for REST request [/_security/_authenticate]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"unable to authenticate user [elasticadmin] for REST request [/_security/_authenticate]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
Docker-Compose:
secrets:
  elasticsearch.keystore:
    file: ${ELK_DATA}/secrets/keystore/elasticsearch.keystore
  elastic.ca:
    file: ${ELK_DATA}/secrets/certs/ca/ca.crt
  elasticsearch.certificate:
    file: ${ELK_DATA}/secrets/certs/elasticsearch/elasticsearch.crt
  elasticsearch.key:
    file: ${ELK_DATA}/secrets/certs/elasticsearch/elasticsearch.key
  kibana.certificate:
    file: ${ELK_DATA}/secrets/certs/kibana/kibana.crt
  kibana.key:
    file: ${ELK_DATA}/secrets/certs/kibana/kibana.key

services:
  ####################################################################
  ############################# ELK ##################################
  ####################################################################
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
    restart: unless-stopped
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTIC_CLUSTER_NAME: ${ELASTIC_CLUSTER_NAME}
      ELASTIC_NODE_NAME: ${ELASTIC_NODE_NAME}
      ELASTIC_INIT_MASTER_NODE: ${ELASTIC_INIT_MASTER_NODE}
      ELASTIC_DISCOVERY_SEEDS: ${ELASTIC_DISCOVERY_SEEDS}
      ES_JAVA_OPTS: -Xmx${ELASTICSEARCH_HEAP} -Xms${ELASTICSEARCH_HEAP} -Des.enforce.bootstrap.checks=true
      bootstrap.memory_lock: "true"
    volumes:
      - ${ELK_DATA}/elasticsearch/data:/usr/share/elasticsearch/data
      - ${ELK_DATA}/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ${ELK_DATA}/elasticsearch/config/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties
    secrets:
      - source: elasticsearch.keystore
        target: /usr/share/elasticsearch/config/elasticsearch.keystore
      - source: elastic.ca
        target: /usr/share/elasticsearch/config/certs/ca.crt
      - source: elasticsearch.certificate
        target: /usr/share/elasticsearch/config/certs/elasticsearch.crt
      - source: elasticsearch.key
        target: /usr/share/elasticsearch/config/certs/elasticsearch.key
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 200000
        hard: 200000
    networks:
      - traefik_proxy
  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:${ELK_VERSION}
    restart: unless-stopped
    volumes:
      - ${ELK_DATA}/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ${ELK_DATA}/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ${ELK_DATA}/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
      LS_JAVA_OPTS: "-Xmx${LOGSTASH_HEAP} -Xms${LOGSTASH_HEAP}"
    ports:
      - 5044:5044
      - 9600:9600
    networks:
      - traefik_proxy
  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:${ELK_VERSION}
    restart: unless-stopped
    volumes:
      - ${ELK_DATA}/kibana/config:/usr/share/kibana/config
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
    secrets:
      - source: elastic.ca
        target: /certs/ca.crt
      - source: kibana.certificate
        target: /certs/kibana.crt
      - source: kibana.key
        target: /certs/kibana.key
    ports:
      - 5601:5601
    networks:
      - traefik_proxy
Where should I start looking to find the source of this issue?
Thanks for any help!
When you enable x-pack, Elasticsearch starts fine, but it seems your Kibana is not getting authenticated. Please see the part of your error message below, which explains this:
elasticadmin user is not authenticated
Check this user and make sure you are passing the correct authentication while accessing Elasticsearch. You need to pass the username and password under the basic authentication mechanism.
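As a quick sanity check (a sketch; the username and password here are placeholders, adjust to your own credentials), you can let curl build the Authorization header for you instead of hand-crafting the base64 value:

# curl encodes user:password into the Basic Authorization header itself
curl -u elasticadmin:your_password -XGET "http://localhost:9200/_security/_authenticate?pretty"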
I had the same issue, but I solved it:
Step 1
You can configure your docker-compose like this:
kibana:
  build: kibana
  container_name: kibana
  ports:
    - 5601:5601
  volumes:
    - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
  networks:
    backend:
      aliases:
        - "kibana"
Step 2
And my kibana.yml file is:
...
elasticsearch.username: "kibana"
elasticsearch.password: "mypwd"
...
and my Dockerfile is:
FROM docker.elastic.co/kibana/kibana:7.10.2
# Kibana reads its configuration from /usr/share/kibana/config/kibana.yml
COPY kibana.yml /usr/share/kibana/config/kibana.yml
USER root
RUN chown root:kibana /usr/share/kibana/config/kibana.yml
USER kibana
I got this issue when the data folder of ElasticSearch was deleted and re-initialized from scratch afterwards. The point is that the built-in users were not initialized.
As soon as I initialized the built-in users the error disappeared and the system worked again.
bin/elasticsearch-setup-passwords interactive|auto [-u "https://<host_name>:9200"]
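If Elasticsearch itself runs in Docker (as in the compose file above), a hedged example of running that tool inside the container, using the container name from that compose file:

# run the password setup inside the running container; "interactive" prompts for each built-in user
docker exec -it elasticsearch bin/elasticsearch-setup-passwords interactive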

SpringData Elasticsearch NoNodeAvailableException

I am using Spring Data to connect my application to a local Elasticsearch instance. When I do a regular curl to get ES info, it works fine, but I am unable to connect to it from the Spring Boot application.
Local Elasticsearch version (./elasticsearch -V): 7.6.0
Spring Data Elasticsearch version: 3.1.11
> curl -XGET 'http://localhost:9200/_cluster/state?pretty'
{
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "1_8HMIK5QDug_xH80VZLgQ",
  "version" : 54,
  "state_uuid" : "YEe1FSwfRUuw0uw-T69fJQ",
  "master_node" : "Nbktx7KrREetbyfL7v0Fog",
  "blocks" : { },
  "nodes" : {
    "Nbktx7KrREetbyfL7v0Fog" : {
      "name" : "k***-macOS",
      "ephemeral_id" : "pqMw40oPTUmBoHsyTAz9cg",
      "transport_address" : "127.0.0.1:9301",
      "attributes" : {
        "ml.machine_memory" : "17179869184",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20"
      }
    }
  },
#Value("$ELASTIC_HOST")
private String EsHost;
#Value("$ELASTIC_PORT")
private String EsPort;
#Bean
public ElasticsearchOperations elasticsearchTemplate() throws UnknownHostException {
return new ElasticsearchTemplate(elasticsearchClient());
}
#Bean
public Client elasticsearchClient() throws UnknownHostException {
Settings settings = Settings.builder()
.put("client.transport.sniff", true).build();
TransportClient client = new PreBuiltTransportClient(settings);
client.addTransportAddress( new TransportAddress(InetAddress.getByName(EsHost), Integer.valueOf(EsPort));
return client;
}
I tried all the above ways to set the host and port (also tried with 9300), but still no luck. Also, my elasticsearch.yml is the default file; I did not add any explicit host or ports.
Docker-compose
version: '3'
services:
  elastic:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
    environment:
      - cluster.name=elasticsearch
      - node.name=es01
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9201:9200"
      - "9301:9300"
  db:
    restart: always
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: 'xxx'
      POSTGRES_USER: 'xx'
      POSTGRES_DB: 'xx'
  api:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      ENVIRONMENT_NAME: "dev"
      REGION_NAME: "local"
      POSTGRES_PASSWORD: "xx"
      POSTGRES_USER: "xx"
      POSTGRES_HOST: "db"
      ELASTIC_HOST: "elastic"
      ELASTIC_PORT: "9200"
    depends_on:
      - db
      - elastic
ERROR:
"failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{JjFZc4y-RBCYbdELAsgaAQ}{elastic}{172.20.0.2:9200}]"}
It works if I change this to
environment:
  ENVIRONMENT_NAME: "dev"
  REGION_NAME: "local"
  POSTGRES_PASSWORD: "xxx"
  POSTGRES_USER: "xx"
  POSTGRES_HOST: "db"
  ELASTIC_HOST: "elastic"
  ELASTIC_PORT: "9300"  # changed from 9200
client.addTransportAddress(new TransportAddress(InetAddress.getLocalHost(), 9201));
No idea why!
Spring Data Elasticsearch 3.1.11 is built with Elasticsearch client libraries in version 6.2.2. So even if you manage to get a connection to the cluster, the chances are very high that the client and the cluster can't communicate properly.
As for the setup of the connection: You should add the name of the cluster you want to connect to into the settings:
Settings settings = Settings.builder()
        .put("client.transport.sniff", true)
        .put("cluster.name", "elasticsearch")
        .build();
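Putting both points together, a minimal sketch of the client wiring under these assumptions: the cluster name is "elasticsearch" (as shown in the cluster state output above) and the client targets the transport port the compose file publishes (9301 on the host). Sniffing is turned off here because, with it enabled, the client switches to the publish addresses the nodes advertise (127.0.0.1:9301 in that cluster state), which may not be reachable from another container:

// Sketch only, not the original configuration
Settings settings = Settings.builder()
        .put("cluster.name", "elasticsearch")
        .put("client.transport.sniff", false)
        .build();

TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new TransportAddress(InetAddress.getByName(EsHost), Integer.parseInt(EsPort)));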

Docker - ELK stack -- "Elasticsearch appears to be unreachable or down"

So I am using docker-compose to launch the ELK stack, which will be fed by Filebeat... My config is something like this:
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=_non_loopback_
  ports:
    - "9200:9200"
    - "9300:9300"
logstash:
  image: logstash:latest
  command: logstash -f /etc/logstash/conf.d/logstash.conf -b 10000 -w 1
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
  ports:
    - "5044:5044"
  links:
    - elasticsearch
  environment:
    - LS_HEAP_SIZE=2048m
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/:/opt/kibana/config/
  ports:
    - "5601:5601"
  links:
    - elasticsearch
My logstash.conf file looks something like this:
input {
  beats {
    port => 5044
  }
}
....
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
These docker containers are running on the same instance and I have confirmed being able to hit both ports externally.
The error that appears when Filebeat ships a file is:
logstash_1 | {:timestamp=>"2016-05-19T19:52:55.167000+0000", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://localhost:9200/\"]', but Elasticsearch appears to be unreachable or down!", :error_message=>"Connection refused", :class=>"Manticore::SocketException", :client_config=>{:hosts=>["http://localhost:9200/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"http", :user=>nil, :password=>nil, :port=>9200}}, :level=>:error}
Thanks,
You are trying to reach Elasticsearch on localhost, but that's not possible: in this case, localhost is the Docker container running Logstash itself.
You have to access it via the link:
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Or, if you want to access your Elasticsearch instance from "outside" instead of via localhost, fill in your machine's IP (not 127.0.0.1).
