Logstash pipeline not showing on Kibana, but logs show Pipelines running - elasticsearch

Trying to set up Elasticsearch, Kibana and Logstash to read logs from a local folder.
It works well on version 7.x.x, but when I try to upgrade to 8 it doesn't.
I am using this YAML file:
version: '3.6'
services:
  Elasticsearch:
    image: elasticsearch:8.4.0
    container_name: elasticsearch
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      - discovery.type=single-node
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  Logstash:
    image: logstash:8.4.0
    container_name: logstash
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - xpack.monitoring.enabled=true
    volumes:
      - ./logstash/:/logstash
      - D:/test/Logs/:/test/Logs
    command: logstash -f /logstash/logstash.conf
    depends_on:
      - Elasticsearch
    ports:
      - '9600:9600'
    networks:
      - elk
  Kibana:
    image: kibana:8.4.0
    container_name: kibana
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - Elasticsearch
    networks:
      - elk
volumes:
  elastic_data: {}
networks:
  elk:
And the config for Logstash:
input {
  file {
    path => "/test/Logs/test.slog"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
test.slog exists and contains logs.
The Logstash container shows the following logs:
[2022-08-27T20:40:32,592][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"ecs-logstash"}
[2022-08-27T20:40:33,450][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.95}
[2022-08-27T20:40:33,451][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.94}
[2022-08-27T20:40:33,516][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2022-08-27T20:40:33,532][INFO ][logstash.inputs.file ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_327fd1919fa26d08ec354604c3e1a1ce", :path=>["/test/Logs/test.slog"]}
[2022-08-27T20:40:33,559][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-08-27T20:40:33,614][INFO ][filewatch.observingtail ][main][8992bf4e2fad9d8838262d3019319d02ab5ffdcb5b282e821574485618753ce9] START, creating Discoverer, Watch with file and sincedb collections
[2022-08-27T20:40:33,625][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
But when I go to Data -> Index Management there is nothing, and the same is true for Ingest Pipelines.
What am I doing wrong?

In Elasticsearch 8, the index names created by the logstash output follow the data stream pattern .ds-logs-generic-default-%{+yyyy.MM.dd} instead of logstash-%{+yyyy.MM.dd}.
This .ds backing index does not appear under Data -> Index Management, but the documents can still be queried.
You can view the .ds-logs-generic index in Kibana under Management > Dev Tools using:
GET _cat/indices
To query the documents you can use the _search API
GET /.ds-logs-generic-default-2022.08.28-000001/_search
{
  "query": {
    "match_all": {}
  }
}
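You can also inspect the data stream itself rather than its backing index (a small extra, assuming the Elasticsearch 8 default data stream name logs-generic-default):
GET _data_stream/logs-generic-default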
If you want to specify the index name, you can add it to the output section of your logstash.conf, e.g. index => "logstash-%{+YYYY.MM.dd}":
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
The newly created index will show in Kibana under Management > Data > Index Management. You may need to add a few log lines at the end of your logfile to kick the indexing pipeline.
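To confirm the override took effect, you can list the matching indices from Dev Tools (a quick check; logstash-* is simply the pattern from the example above):
GET _cat/indices/logstash-*?v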

Related

BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL

I'm using the logstash:7.9.1 image and I get this error when I bring the stack up with docker-compose, and I don't know what to do about it. (I tried deliberately breaking my Logstash config and pointing it at the wrong Elasticsearch port, but it still connects to 9200, so I suspect it isn't reading my Logstash config at all.) Please help!
my error:
[logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
my docker-compose:
zookeeper:
  image: wurstmeister/zookeeper:3.4.6
  container_name: zookeeper
  ports:
    - 2181:2181
  networks:
    - bardz
kafka:
  image: wurstmeister/kafka:2.11-1.1.0
  container_name: kafka
  depends_on:
    - zookeeper
  environment:
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_CREATE_TOPICS: logs-topic:1:1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  ports:
    - 9092:9092
  volumes:
    - kofka-volume:/var/run/docker.sock
  networks:
    - bardz
elasticsearch:
  build:
    context: elk/elasticsearch/
    args:
      ELK_VERSION: "7.9.1"
  volumes:
    - type: bind
      source: ./elk/elasticsearch/config/elasticsearch.yml
      target: /usr/share/elasticsearch/config/elasticsearch.yml
      read_only: true
    - type: volume
      source: elasticsearch
      target: /usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    ELASTIC_PASSWORD: changeme
    # Use single node discovery in order to disable production mode and avoid bootstrap checks
    # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
    discovery.type: single-node
  networks:
    - bardz
logstash:
  image: logstash:7.9.1
  restart: on-failure
  ports:
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  volumes:
    - logstash_data:/bitnami
    - ./elk/logstash/logstash-kafka.conf:/opt/bitnami/logstash/config/logstash-kafka.conf
  environment:
    LOGSTASH_CONF_FILENAME: logstash-kafka.conf
  networks:
    - bardz
  depends_on:
    - elasticsearch
networks:
  bardz:
    external: true
    driver: bridge
volumes:
  elasticsearch:
  zipkin-volume:
  kofka-volume:
  logstash_data:
my logstash config:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["logs-topic"]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => elastic
    password => changeme
    index => "logs-topic"
    workers => 1
  }
}
You are using the wrong password for the elastic user: in 7.9 the default was changed from changeme to password, as shown in the ES contribution doc, but as far as I can tell that only applies when you are running ES from source code.
In any case, you are getting a 401, which means unauthorized access; you can read more about it here.
As you are not running ES from source, I would advise you to follow the steps mentioned in this thread to change the password. Since you are running it in docker, you need to go inside the container with docker exec -it <cont-id> /bin/bash and then run the command mentioned in the thread to set your own password.
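For completeness, a rough sketch of those steps (assuming the standard Elasticsearch image layout, where the setup tool lives under bin/):
# open a shell inside the Elasticsearch container
docker exec -it <cont-id> /bin/bash
# from inside the container, interactively set passwords for the built-in users (elastic, kibana, logstash_system, ...)
bin/elasticsearch-setup-passwords interactive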

Kibana 7.7.0 Basic version: management tab missing Security panel when started from docker

Context: I want to use X-Pack in order to control which user can see which Dashboard, using only the free version.
I downloaded the Kibana 7.7.0 zip from here, installed it, and I can see the Security options to create users/roles. In fact, I created an index, a user and a role, and successfully assigned the index to that role with Elastic/Kibana installed on my Windows machine.
The issue happens only with Elastic/Kibana started from docker. I started Kibana 7.7.0 from docker and I can't see the Security panel under the Management page. Googling, I found that I must use the Basic version instead of the Open Source one. As far as I can see, the docker-compose below is downloading the Basic version, since there isn't "-oss" at the end. Also, I must use installers provided by Elastic instead of Apache; as far as I can see, it is pulling an image not related to Apache.
I am not sure if the issue is only with Kibana, since I was able to enable xpack security on Elasticsearch and run elasticsearch-setup-passwords interactive inside the elastic docker container. I can log in to Kibana with the elastic user, but I don't see the Security tab under Management.
Also, I am getting an issue from Logstash trying to connect to Elasticsearch even though I set up the logstash_system user (see logstash.conf below).
You can see that I have set xpack.security.enabled=true on Elasticsearch.
docker-compose.yml
version: '3.2'
services:
  zoo1:
    image: elevy/zookeeper:latest
    environment:
      MYID: 1
      SERVERS: zoo1
    ports:
      - "2181:2181"
  kafka1:
    image: wurstmeister/kafka
    command: [start-kafka.sh]
    depends_on:
      - zoo1
    links:
      - zoo1
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
      KAFKA_CREATE_TOPICS: "log:1:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.7.0
    command: filebeat -e -strict.perms=false
    volumes:
      - "//c/Users/my-comp/docker_folders/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "//c/Users/my-comp/docker_folders/sample-logs:/sample-logs"
    links:
      - kafka1
    depends_on:
      - kafka1
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - "//c/Users/my-comp/docker_folders/esdata:/usr/share/elasticsearch/data"
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.7.0
    volumes:
      - "//c/Users/my-comp/docker_folders/kibana.yml:/usr/share/kibana/config/kibana.yml"
    restart: always
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_HOSTS=http://x.x.x.x:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:7.7.0
    volumes:
      - "//c/Users/my-comp/docker_folders/logstash.conf:/config-dir/logstash.conf"
    restart: always
    command: logstash -f /config-dir/logstash.conf
    ports:
      - "9600:9600"
      - "7777:7777"
    links:
      - elasticsearch
      - kafka1
kibana.yml
server.name: kibana
server.host: "0"
xpack.monitoring.ui.container.elasticsearch.enabled: false
elasticsearch.ssl.verificationMode: none
elasticsearch.username: "kibana"
elasticsearch.password: "k12345"
logstash.conf
input {
  kafka {
    codec => "json"
    bootstrap_servers => "kafka1:9092"
    topics => ["app_logs","request_logs"]
    tags => ["myapp"]
  }
}
filter {
  *** not relevant
}
output {
  elasticsearch {
    hosts => ["http://x.x.x.x:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    user => "logstash_system"
    password => "l12345"
  }
}
In case it is worth mentioning, Logstash is failing to connect to Elasticsearch with the log below and, as you can see from logstash.conf, I set up logstash_system (the user created by elasticsearch-setup-passwords interactive):
logstash_1 | [2020-05-19T20:18:45,559][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-19T20:19:13,815][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
So, my straight question is: am I missing some extra configuration in order to enable Security in Kibana? Surrounding questions are: is Kibana/Elastic from docker not the same as the zip distribution? Am I missing some extra configuration in order to allow Logstash to connect to Elasticsearch?
*** edited
Logstash is still failing to connect to Elasticsearch after I changed to
logstash.conf
...
output {
  elasticsearch {
    #hosts => [ "${ELASTIC_HOST1}", "${ELASTIC_HOST2}", "${ELASTIC_HOST3}" ]
    #hosts => ["http://192.168.99.100:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
    xpack.monitoring.elasticsearch.username: "logstash_system"
    xpack.monitoring.elasticsearch.password: => "l12345"
  }
}
The logs are
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2020-05-20T13:39:05,095][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
logstash_1 | [2020-05-20T13:39:05,120][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.0"}
logstash_1 | [2020-05-20T13:39:06,134][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
logstash_1 | [2020-05-20T13:39:06,150][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2020-05-20T13:39:08,008][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-20T13:39:08,408][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-20T13:39:08,506][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
filebeat_1 | 2020-05-20T13:38:53.069Z INFO log/harvester.go:297 Harvester started for file: /sample-logs/request-2019-11-17F.log
logstash_1 | [2020-05-20T13:39:08,611][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash_1 | [2020-05-20T13:39:11,449][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [A-Za-z0-9_-], [ \\t\\r\\n], \"#\", \"=>\" at line 86, column 7 (byte 2771) after output {\r\n elasticsearch {\r\n #hosts => [ \"${ELASTIC_HOST1}\", \"${ELASTIC_HOST2}\", \"${ELASTIC_HOST3}\" ]\r\n\t#hosts => [\"http://192.168.99.100:9200\"]\r\n index => \"%{[fields][topic_name]}-%{+YYYY.MM.dd}\"\r\n\txpack", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:58:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:66:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:28:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:27:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:43:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:342:in `block in converge_state'"]}
I guess the most relevant part of this log is:
logstash_1 | [2020-05-20T13:39:08,008][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-20T13:39:08,408][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-20T13:39:08,506][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
Note that it is failing with the "Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'" error. I guess that in my particular docker setup it needs to use the Docker Machine IP, which in my case is 192.168.99.100. Is there some way to replace elasticsearch with this IP?

Index Name Not Being Set in Filebeat to Elasticsearch - ELK .NET Docker ElasticHQ

I am experimenting with some json that has been formatted in accordance with Elasticsearch, so I have gone directly from Filebeat to Elasticsearch, as opposed to going through Logstash. This is using docker-compose:
version: '2.2'
services:
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - cluster.name=docker-
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - esnet
  filebeat:
    container_name: filebeat
    build:
      context: .
      dockerfile: filebeat.Dockerfile
    volumes:
      - ./logs:/var/log
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml
    networks:
      - esnet
  elastichq:
    container_name: elastichq
    image: elastichq/elasticsearch-hq
    ports:
      - 8080:5000
    environment:
      - HQ_DEFAULT_URL=http://elasticsearch:9200
      - HQ_ENABLE_SSL=False
      - HQ_DEBUG=FALSE
    networks:
      - esnet
networks:
  esnet:
However, when I open ElasticHQ the index name has been labeled as filebeat-7.5.2-2020.02.10-000001 with a date stamp. I have specified the index name as Sample in my filebeat.yml. Is there something I am missing, or is this behavior normal?
Here is my filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.json
    json.keys_under_root: true
    json.add_error_key: true
#----------------------------- Elasticsearch output --------------------------------
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "sample-%{+YYYY.MM.dd}"
setup.template.name: "sample"
setup.template.pattern: "sample-*"
It would be more practical to know the index name in advance so that, if I use Postman as opposed to ElasticHQ, I can start querying my data without having to look up the index name.
I think Filebeat ILM might be taking over instead of the configured index name.
Starting with version 7.0, Filebeat uses index lifecycle management by default when it connects to a cluster that supports lifecycle management. Filebeat loads the default policy automatically and applies it to any indices created by Filebeat.
And when ILM is enabled, the Filebeat Elasticsearch output index settings are ignored:
The index setting is ignored when index lifecycle management is enabled. If you're sending events to a cluster that supports index lifecycle management, see Configure index lifecycle management to learn how to change the index name.
You might need to disable ILM or, better yet, configure your desired index name using the ILM rollover_alias, as sketched below.
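A minimal filebeat.yml sketch of both routes (assuming Filebeat 7.x; "sample" is just the example name used above, not something prescribed by Filebeat):
# Option 1: turn ILM off so output.elasticsearch.index is honored
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "sample-%{+yyyy.MM.dd}"
setup.template.name: "sample"
setup.template.pattern: "sample-*"

# Option 2: keep ILM but control the write alias and backing index names
setup.ilm.enabled: true
setup.ilm.rollover_alias: "sample"
setup.ilm.pattern: "{now/d}-000001"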

Why elasticsearch on docker swarm requires a transport.host=localhost setting?

I'm trying to run Elasticsearch on a docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indices require an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the elasticsearch docker container in the transport.host setting, or when I omit the transport.host=localhost setting entirely.
I think that using the transport.host=localhost setting is wrong. Is there a proper configuration of Elasticsearch for docker swarm?

logstash sincedb file not created with docker-compose

I have a Logstash instance running with docker-compose on an AWS EC2 (AMI) instance. I have mounted a folder as a volume in the container, and the Logstash pipeline config writes its sincedb file into that mounted folder. The pipeline runs, but it never writes the sincedb file.
The same configuration works on my local machine, but not on EC2. I have checked that the user has rights to write in the folder by creating a file there (e.g. vi test).
Docker compose config:
version: "2"
services:
logstash:
image: docker.elastic.co/logstash/logstash:7.2.0
volumes:
- ./logstash/pipeline/:/usr/share/logstash/pipeline/
- ./logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml
- ../data/:/usr/data/:rw
- ./logstash/templates/:/usr/share/logstash/templates/
container_name: logstash
ports:
- 9600:9600
env_file:
- ../env/.env.logstash
Logstash input:
input {
  s3 {
    access_key_id => "${AWS_ACCESS_KEY}"
    bucket => "xyz-bucket"
    secret_access_key => "${AWS_SECRET_KEY}"
    region => "eu-west-1"
    prefix => "logs/"
    type => "log"
    codec => "json"
    sincedb_path => "/usr/data/log-sincedb.file"
  }
}
I've fixed this. I had to explicitly add user: root to the docker-compose service config.
version: "2"
services:
logstash:
image: docker.elastic.co/logstash/logstash:7.2.0
user: root
volumes:
- ./logstash/pipeline/:/usr/share/logstash/pipeline/
- ./logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml
- ../data/:/usr/data/:rw
- ./logstash/templates/:/usr/share/logstash/templates/
container_name: logstash
ports:
- 9600:9600
env_file:
- ../env/.env.logstash
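For what it's worth, my understanding (an assumption on my part, not something verified in the post above) is that the official Logstash image runs as a non-root logstash user, so the mounted ../data/ folder on the EC2 host may simply not be writable by that user; running the container as root sidesteps that. An alternative would be to keep the default user and grant it write access on the host, roughly:
# on the EC2 host: make the mounted folder writable by the container's logstash user
# (uid 1000 in the official image -- verify with: docker exec logstash id logstash)
sudo chown -R 1000:1000 ../data/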
