I am using Logstash via the logstash:7.9.1 image, and I get the error below when I bring everything up with docker-compose; I don't know what to do about it. (To test, I deliberately broke my Logstash config and pointed it at the wrong Elasticsearch port, but the container still connects to 9200, so I suspect it isn't reading my config at all.) Please help!
My error:
[logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
My docker-compose:
zookeeper:
image: wurstmeister/zookeeper:3.4.6
container_name: zookeeper
ports:
- 2181:2181
networks:
- bardz
kafka:
image: wurstmeister/kafka:2.11-1.1.0
container_name: kafka
depends_on:
- zookeeper
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_CREATE_TOPICS: logs-topic:1:1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
ports:
- 9092:9092
volumes:
- kofka-volume:/var/run/docker.sock
networks:
- bardz
elasticsearch:
build:
context: elk/elasticsearch/
args:
ELK_VERSION: "7.9.1"
volumes:
- type: bind
source: ./elk/elasticsearch/config/elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
read_only: true
- type: volume
source: elasticsearch
target: /usr/share/elasticsearch/data
ports:
- "9200:9200"
- "9300:9300"
environment:
ES_JAVA_OPTS: "-Xmx256m -Xms256m"
ELASTIC_PASSWORD: changeme
# Use single node discovery in order to disable production mode and avoid bootstrap checks
# see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
discovery.type: single-node
networks:
- bardz
logstash:
image: logstash:7.9.1
restart: on-failure
ports:
- "5000:5000/tcp"
- "5000:5000/udp"
- "9600:9600"
volumes:
- logstash_data:/bitnami
- ./elk/logstash/logstash-kafka.conf:/opt/bitnami/logstash/config/logstash-kafka.conf
environment:
LOGSTASH_CONF_FILENAME: logstash-kafka.conf
networks:
- bardz
depends_on:
- elasticsearch
networks:
bardz:
external: true
driver: bridge
volumes:
elasticsearch:
zipkin-volume:
kofka-volume:
logstash_data:
My logstash config:
input {
kafka {
bootstrap_servers => "kafka:9092"
topics => ["logs-topic"]
}
}
output {
elasticsearch {
hosts => ["elasticsearch:9200"]
user => "elastic"
password => "changeme"
index => "logs-topic"
workers => 1
}
}
You are using the wrong password for the elastic user: in 7.9 it was changed from changeme to password, as shown in the ES contribution doc, but I tried it and this seems to apply only when you are running ES from source.
In any case, getting a 401 means unauthorized access, and you can read more about it here.
As you are not running ES from source, I would advise you to follow the steps mentioned in this thread to change the password. Since you are running it in Docker, you need to go inside the container with docker exec -it <cont-id> /bin/bash and then run the command mentioned in the thread to set your own password.
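For reference, a minimal sketch of those steps, assuming the container is reachable by the name elasticsearch (substitute your own container name or ID):

docker exec -it elasticsearch /bin/bash
# inside the container: interactively set passwords for the built-in users
bin/elasticsearch-setup-passwords interactive

One more thing worth checking in your compose file: logstash:7.9.1 is the official Elastic image, which loads pipelines from /usr/share/logstash/pipeline/. The /opt/bitnami/logstash/config/ path and the LOGSTASH_CONF_FILENAME variable are Bitnami conventions that the official image ignores, which would also explain why breaking the config changed nothing. A hedged variant of the volume mount:

volumes:
  # mount the pipeline where the official image actually looks for it
  - ./elk/logstash/logstash-kafka.conf:/usr/share/logstash/pipeline/logstash-kafka.conf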
Context: I want to use X-Pack to control which user can see which dashboard, using only the free version.
I downloaded the Kibana 7.7.0 zip from here, installed it, and I can see the Security options to create users/roles. In fact, I created an index, a user, and a role, and successfully assigned the index to that role with this locally installed Elastic/Kibana on my Windows machine.
The issue happens only with Elastic/Kibana started from Docker. I started Kibana 7.7.0 from Docker and I can't see the Security panel under the Management page. Googling, I found I must use the Basic version instead of Open Source. As far as I can tell, the docker-compose below downloads the Basic version, since the image name doesn't end in "-oss". I also read that I must use the distributions provided by Elastic instead of the Apache-licensed ones, and as far as I can see it is pulling an image unrelated to the Apache-licensed one.
I am not sure if the issue is only with Kibana, since I could enable X-Pack security on Elasticsearch and run elasticsearch-setup-passwords interactive inside the Elasticsearch container. I can log in to Kibana with the elastic user, but I don't see the Security tab under Management.
Also, I am getting an error from Logstash trying to connect to Elasticsearch, even though I set the logstash_system user (see logstash.conf below).
You can see that I have set xpack.security.enabled=true on Elasticsearch.
docker-compose.yml
version: '3.2'
services:
zoo1:
image: elevy/zookeeper:latest
environment:
MYID: 1
SERVERS: zoo1
ports:
- "2181:2181"
kafka1:
image: wurstmeister/kafka
command: [start-kafka.sh]
depends_on:
- zoo1
links:
- zoo1
ports:
- "9092:9092"
environment:
KAFKA_LISTENERS: PLAINTEXT://:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092
KAFKA_BROKER_ID: 1
KAFKA_ADVERTISED_PORT: 9092
KAFKA_LOG_RETENTION_HOURS: "168"
KAFKA_LOG_RETENTION_BYTES: "100000000"
KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
KAFKA_CREATE_TOPICS: "log:1:1"
KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
filebeat:
image: docker.elastic.co/beats/filebeat:7.7.0
command: filebeat -e -strict.perms=false
volumes:
- "//c/Users/my-comp/docker_folders/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
- "//c/Users/my-comp/docker_folders/sample-logs:/sample-logs"
links:
- kafka1
depends_on:
- kafka1
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
environment:
- cluster.name=docker-cluster
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- xpack.security.enabled=true
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- "//c/Users/my-comp/docker_folders/esdata:/usr/share/elasticsearch/data"
ports:
- "9200:9200"
kibana:
image: docker.elastic.co/kibana/kibana:7.7.0
volumes:
- "//c/Users/my-comp/docker_folders/kibana.yml:/usr/share/kibana/config/kibana.yml"
restart: always
environment:
- SERVER_NAME=kibana.localhost
- ELASTICSEARCH_HOSTS=http://x.x.x.x:9200
ports:
- "5601:5601"
links:
- elasticsearch
depends_on:
- elasticsearch
logstash:
image: docker.elastic.co/logstash/logstash:7.7.0
volumes:
- "//c/Users/my-comp/docker_folders/logstash.conf:/config-dir/logstash.conf"
restart: always
command: logstash -f /config-dir/logstash.conf
ports:
- "9600:9600"
- "7777:7777"
links:
- elasticsearch
- kafka1
kibana.yml
server.name: kibana
server.host: "0"
xpack.monitoring.ui.container.elasticsearch.enabled: false
elasticsearch.ssl.verificationMode: none
elasticsearch.username: "kibana"
elasticsearch.password: "k12345"
logstash.conf
input{
kafka{
codec => "json"
bootstrap_servers => "kafka1:9092"
topics => ["app_logs","request_logs"]
tags => ["myapp"]
}
}
filter {
*** not relevant
}
output {
elasticsearch {
hosts => ["http://x.x.x.x:9200"]
index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
user => "logstash_system"
password => "l12345"
}
}
In case it is worth mentioning, Logstash is failing to connect to Elasticsearch with the log below and, as you can see from logstash.conf, I set up logstash_system (the user created by elasticsearch-setup-passwords interactive):
logstash_1 | [2020-05-19T20:18:45,559][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-19T20:19:13,815][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
So, my direct question is: am I missing some extra configuration needed to enable Security in Kibana? Surrounding questions: is Kibana/Elastic from Docker not the same as from the zip file? Am I missing some extra configuration to allow Logstash to connect to Elasticsearch?
*** edited
Logstash is still failing to connect to Elasticsearch after I changed it to the following:
logstash.conf
...
output {
elasticsearch {
#hosts => [ "${ELASTIC_HOST1}", "${ELASTIC_HOST2}", "${ELASTIC_HOST3}" ]
#hosts => ["http://192.168.99.100:9200"]
index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: => "l12345"
}
}
The logs are:
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2020-05-20T13:39:05,095][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
logstash_1 | [2020-05-20T13:39:05,120][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.0"}
logstash_1 | [2020-05-20T13:39:06,134][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
logstash_1 | [2020-05-20T13:39:06,150][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2020-05-20T13:39:08,008][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-20T13:39:08,408][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-20T13:39:08,506][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
filebeat_1 | 2020-05-20T13:38:53.069Z INFO log/harvester.go:297 Harvester started for file: /sample-logs/request-2019-11-17F.log
logstash_1 | [2020-05-20T13:39:08,611][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash_1 | [2020-05-20T13:39:11,449][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [A-Za-z0-9_-], [ \\t\\r\\n], \"#\", \"=>\" at line 86, column 7 (byte 2771) after output {\r\n elasticsearch {\r\n #hosts => [ \"${ELASTIC_HOST1}\", \"${ELASTIC_HOST2}\", \"${ELASTIC_HOST3}\" ]\r\n\t#hosts => [\"http://192.168.99.100:9200\"]\r\n index => \"%{[fields][topic_name]}-%{+YYYY.MM.dd}\"\r\n\txpack", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:58:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:66:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:28:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:27:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:43:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:342:in `block in converge_state'"]}
I guess the most relevant part of this log is:
logstash_1 | [2020-05-20T13:39:08,008][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-20T13:39:08,408][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-20T13:39:08,506][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
Note that it is failing with the "Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'" error. I guess that in my particular Docker setup it needs to be the Docker Machine IP, which in my case is 192.168.99.100. Is there some way to replace elasticsearch with this IP?
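A note on both problems, for what it's worth. The parse error above occurs because xpack.monitoring.* keys are Logstash settings, not pipeline options; they belong in logstash.yml, as the "Please explicitly set xpack.monitoring.enabled: true in logstash.yml" warning in the log hints. The http://elasticsearch:9200 URL in the license checker log most likely comes from the xpack.monitoring.elasticsearch.hosts default shipped in the Docker image's logstash.yml, so overriding it there also replaces the hostname with your IP. A minimal sketch, reusing the logstash_system credentials and the Docker Machine IP from the question:

logstash.yml:

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "l12345"

while the pipeline output keeps only plain options:

output {
  elasticsearch {
    # sketch: reuses the credentials from the question above
    hosts => ["http://192.168.99.100:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    user => "logstash_system"
    password => "l12345"
  }
}

Note that logstash_system is intended for monitoring; the output block may need a user with index write privileges instead.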
I am experimenting with some json that has been formatted in accordance with Elasticsearch, so I have gone directly from Filebeat to Elasticsearch, as opposed to going through Logstash. This is using docker-compose:
version: '2.2'
services:
elasticsearch:
container_name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
ports:
- 9200:9200
- 9300:9300
environment:
- discovery.type=single-node
- cluster.name=docker-
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
networks:
- esnet
filebeat:
container_name: filebeat
build:
context: .
dockerfile: filebeat.Dockerfile
volumes:
- ./logs:/var/log
- ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
networks:
- esnet
elastichq:
container_name: elastichq
image: elastichq/elasticsearch-hq
ports:
- 8080:5000
environment:
- HQ_DEFAULT_URL=http://elasticsearch:9200
- HQ_ENABLE_SSL=False
- HQ_DEBUG=FALSE
networks:
- esnet
networks:
esnet:
However, when I open ElasticHQ, the index is named filebeat-7.5.2-2020.02.10-000001, with a date stamp, even though I specified the index name as sample in my filebeat.yml. Is there something I am missing, or is this behavior normal?
Here is my filebeat.yml
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/*.json
json.keys_under_root: true
json.add_error_key: true
#----------------------------- Elasticsearch output --------------------------------
output.elasticsearch:
hosts: ["elasticsearch:9200"]
index: "sample-%{+YYYY.MM.dd}"
setup.template.name: "sample"
setup.template.pattern: "sample-*"
It would be more practical to have a predictable name, so that if I use Postman instead of ElasticHQ I can start querying my data without having to look up the index name.
I think Filebeat's ILM (index lifecycle management) is taking over and overriding the configured index name. From the Filebeat docs:
Starting with version 7.0, Filebeat uses index lifecycle management by default when it connects to a cluster that supports lifecycle management. Filebeat loads the default policy automatically and applies it to any indices created by Filebeat.
And when ILM is enabled, the Filebeat Elasticsearch output index settings are ignored:
The index setting is ignored when index lifecycle management is enabled. If you're sending events to a cluster that supports index lifecycle management, see Configure index lifecycle management to learn how to change the index name.
You might need to disable ILM or, better yet, configure your desired index name using an ILM rollover_alias, as sketched below.
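A minimal sketch of both options in filebeat.yml (setting names as in Filebeat 7.x; pick one and adjust to your version):

# Option 1: disable ILM so output.elasticsearch.index and setup.template.* take effect
setup.ilm.enabled: false

# Option 2: keep ILM but control the write alias, and hence the index names
setup.ilm.enabled: true
setup.ilm.rollover_alias: "sample"
setup.ilm.pattern: "{now/d}-000001"

With option 2, Filebeat writes to the alias sample, backed by indices named like sample-2020.02.10-000001, which gives you the predictable name to query from Postman.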
I'm trying to run Elasticsearch on a Docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indices want an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the Elasticsearch container as the transport.host value, or when I omit the transport.host=localhost setting entirely.
I think that using transport.host=localhost is wrong. Is there a proper way to configure Elasticsearch in Docker swarm?
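For a single-node cluster, a commonly used alternative (and what the other compose files in this thread do) is to drop transport.host and enable single-node discovery, which also skips the production bootstrap checks. A sketch of the environment block under that assumption:

environment:
  - cluster.name=elasticsearch
  - bootstrap.memory_lock=true
  - ES_JAVA_OPTS=-Xms512m -Xmx512m
  # single-node discovery instead of transport.host=localhost
  - discovery.type=single-node

A real multi-node swarm deployment would instead need the 7.x discovery settings (discovery.seed_hosts and cluster.initial_master_nodes); treat the snippet above as a single-node sketch only.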
I have a Logstash instance running with docker-compose on an AWS EC2 (AMI) instance. I have mounted a folder as a volume in the container, and my Logstash pipeline config is set to write its sincedb file to that mounted folder. The pipeline runs, but it never writes the sincedb file.
The same configuration works on my local machine, but not on EC2. I have checked that the user has rights to write in the folder by creating a file there (e.g. with vi test).
Docker compose config:
version: "2"
services:
logstash:
image: docker.elastic.co/logstash/logstash:7.2.0
volumes:
- ./logstash/pipeline/:/usr/share/logstash/pipeline/
- ./logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml
- ../data/:/usr/data/:rw
- ./logstash/templates/:/usr/share/logstash/templates/
container_name: logstash
ports:
- 9600:9600
env_file:
- ../env/.env.logstash
Logstash input:
input{
s3 {
access_key_id => "${AWS_ACCESS_KEY}"
bucket => "xyz-bucket"
secret_access_key => "${AWS_SECRET_KEY}"
region => "eu-west-1"
prefix => "logs/"
type => "log"
codec => "json"
sincedb_path => "/usr/data/log-sincedb.file"
}
}
I've fixed this. I had to explicitly add user: root to the docker-compose service config:
version: "2"
services:
logstash:
image: docker.elastic.co/logstash/logstash:7.2.0
user: root
volumes:
- ./logstash/pipeline/:/usr/share/logstash/pipeline/
- ./logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml
- ../data/:/usr/data/:rw
- ./logstash/templates/:/usr/share/logstash/templates/
container_name: logstash
ports:
- 9600:9600
env_file:
- ../env/.env.logstash
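An alternative to running the whole container as root, assuming the official image's default logstash user (UID 1000), is to grant that UID write access to the mounted host folder instead:

# on the EC2 host: let the container's logstash user (UID 1000) write the sincedb file
sudo chown -R 1000:1000 ../data/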