I cannot log data through an AWS EC2 Logstash instance to AWS ES, while I can log data to AWS ES with a Docker Logstash service using the same logstash.conf configuration file.
Security group for EC2 logstash instance:
logstash.conf
input {
  udp {
    port => 8089
    codec => json
  }
}
output {
  elasticsearch {
    ...
  }
}
...
running logstash:
....
[INFO ] xxxxxxxxxxxxxxxx [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:8089"}
[INFO ] xxxxxxxxxxxxxxxx [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:8089", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[INFO ] xxxxxxxxxxxxxxxx [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
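As a quick sanity check (a sketch only; it assumes the sender is on another host and that netcat is installed), you can push a test event at the EC2 listener and watch whether Logstash picks it up. For this to work, the instance's security group must allow inbound UDP on port 8089 from the sender's IP:
echo '{"message":"udp connectivity test"}' | nc -u -w1 <EC2_IP> 8089
If the event never shows up in Logstash's output, the packet is most likely being dropped before it reaches the instance (security group, NACL, or OS firewall); UDP gives no connection error on the sender side.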
But when running the Logstash service via docker-compose.yml, logs are sent to AWS ES. Below is the Logstash service configuration from docker-compose.yml:
logstash:
  image: logstash:7.7.0
  container_name: logstash
  hostname: logstash
  ports:
    - 9600:9600
    - 8089:8089
  volumes:
    - ./etc/infrastructure/logstash:/usr/share/logstash/pipeline
  networks:
    - api
Trying to set up Elasticsearch, Kibana, and Logstash to read logs from a local folder.
It works well on version 7.x.x, but when I try to upgrade to 8 it doesn't.
I am using this YAML file:
version: '3.6'
services:
  Elasticsearch:
    image: elasticsearch:8.4.0
    container_name: elasticsearch
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      - discovery.type=single-node
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  Logstash:
    image: logstash:8.4.0
    container_name: logstash
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - xpack.monitoring.enabled=true
    volumes:
      - ./logstash/:/logstash
      - D:/test/Logs/:/test/Logs
    command: logstash -f /logstash/logstash.conf
    depends_on:
      - Elasticsearch
    ports:
      - '9600:9600'
    networks:
      - elk
  Kibana:
    image: kibana:8.4.0
    container_name: kibana
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - Elasticsearch
    networks:
      - elk
volumes:
  elastic_data: {}
networks:
  elk:
and config for logstash:
input {
  file {
    path => "/test/Logs/test.slog"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
test.slog exists and contains logs.
The Logstash Docker container shows the following logs:
[2022-08-27T20:40:32,592][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"ecs-logstash"}
[2022-08-27T20:40:33,450][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.95}
[2022-08-27T20:40:33,451][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.94}
[2022-08-27T20:40:33,516][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2022-08-27T20:40:33,532][INFO ][logstash.inputs.file ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_327fd1919fa26d08ec354604c3e1a1ce", :path=>["/test/Logs/test.slog"]}
[2022-08-27T20:40:33,559][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-08-27T20:40:33,614][INFO ][filewatch.observingtail ][main][8992bf4e2fad9d8838262d3019319d02ab5ffdcb5b282e821574485618753ce9] START, creating Discoverer, Watch with file and sincedb collections
[2022-08-27T20:40:33,625][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
But when I go to Data -> Index Management there is nothing, and the same goes for Ingest Pipelines.
What am I doing wrong?
In Elasticsearch 8 the Logstash elasticsearch output writes to a data stream by default, so documents end up in backing indices named .ds-logs-generic-default-%{+yyyy.MM.dd}-NNNNNN instead of logstash-%{+yyyy.MM.dd}.
These .ds backing indices do not appear under Data -> Index Management by default, but the documents can still be queried.
You can see the .ds-logs-generic-* backing indices in Kibana under Management > Dev Tools using:
GET _cat/indices
To query the documents you can use the _search API
GET /.ds-logs-generic-default-2022.08.28-000001/_search
{
  "query": {
    "match_all": {}
  }
}
If you want to specify the index name, you can add it to the output section of your logstash.conf, e.g. index => "logstash-%{+YYYY.MM.dd}":
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
The newly created index will show in Kibana under Management > Data > Index Management. You may need to add a few log lines at the end of your logfile to kick the indexing pipeline.
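Alternatively (a sketch, assuming the default data stream name logs-generic-default), you can list the data stream and its backing indices directly from Dev Tools:
GET _data_stream/logs-generic-default
This also makes it clear that the .ds-* names are backing indices of a data stream rather than regular indices.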
I'm using Amazon OpenSearch with the engine OpenSearch 1.2.
I was working on setting up APM with the following details
Service 1 - Tomcat Application running on an EC2 server that accesses an RDS database. The server is behind a load balancer with a sub-domain mapped to it.
I added a setenv.sh file in the tomcat/bin folder with the following content:
#!/bin/sh
export CATALINA_OPTS="$CATALINA_OPTS -javaagent:<PATH_TO_JAVA_AGENT>"
export OTEL_METRICS_EXPORTER=none
export OTEL_EXPORTER_OTLP_ENDPOINT=http://<OTEL_COLLECTOR_SERVER_IP>:4317
export OTEL_RESOURCE_ATTRIBUTES=service.name=<SERVICE_NAME>
export OTEL_INSTRUMENTATION_COMMON_PEER_SERVICE_MAPPING=<RDS_HOST_ENDPOINT>=Database-Service
The OTEL Java Agent collects traces from the application.
The OTEL Collector and Data Prepper run on another server with the following configuration.
docker-compose.yml
version: "3.7"
services:
data-prepper:
restart: unless-stopped
image: opensearchproject/data-prepper:1
volumes:
- ./pipelines.yaml:/usr/share/data-prepper/pipelines.yaml
- ./data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml
networks:
- apm_net
otel-collector:
restart: unless-stopped
image: otel/opentelemetry-collector:0.55.0
command: [ "--config=/etc/otel-collector-config.yml" ]
volumes:
- ./otel-collector-config.yml:/etc/otel-collector-config.yml
ports:
- "4317:4317"
depends_on:
- data-prepper
networks:
- apm_net
data-prepper-config.yaml
ssl: false
otel-collector-config.yml
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlp/data-prepper:
    endpoint: http://data-prepper:21890
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/data-prepper]
pipelines.yaml
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
        hosts:
          [
            <AWS OPENSEARCH HOST>,
          ]
        # IAM signing
        aws_sigv4: true
        aws_region: <AWS_REGION>
        index_type: "trace-analytics-raw"
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts:
          [
            <AWS OPENSEARCH HOST>,
          ]
        # IAM signing
        aws_sigv4: true
        aws_region: <AWS_REGION>
        index_type: "trace-analytics-service-map"
Data Prepper authenticates via fine-grained access control with the all_access role, and I can see the OTEL resources (indices, index templates) it generates when running.
On running the above setup, I can see traces from the application in the Trace Analytics dashboard of OpenSearch, and when clicking on an individual trace I see a pie chart with one service. I don't see any errors in either the otel-collector or data-prepper, and in the Data Prepper logs I can see records being sent to the OTEL service map.
However, the Services tab of Trace Analytics remains empty, and the OTEL service-map index also remains empty.
I have been unable to figure out the reason behind this even after going through the documentation, and any help is appreciated!
Context: I want to use X-Pack in order to control which user can see which dashboard, using only the free version.
I downloaded the Kibana 7.7.0 zip from here, installed it, and I can see the Security options to create users/roles. In fact, I created an index, a user, and a role, and successfully assigned the index to this role with this locally installed Elastic/Kibana on my Windows machine.
The issue happens only with Elastic/Kibana started from Docker. I started Kibana 7.7.0 from Docker and I can't see the Security panel under the Management page. Googling, I found I must use the Basic version instead of the Open Source one. As far as I can see, the docker-compose below is downloading the Basic version, since there isn't "-oss" at the end of the image name. I also must use the distributions provided by Elastic instead of the Apache-licensed (OSS) ones, and as far as I can see it is not pulling an OSS image.
I am not sure if the issue is only with Kibana, since I could enable X-Pack security on Elasticsearch and run elasticsearch-setup-passwords interactive inside the Elasticsearch Docker container. I can log in to Kibana with the elastic user, but I don't see the Security tab under Management.
Also, I am getting an issue with Logstash trying to connect to Elasticsearch, even though I set up logstash_system (see logstash.conf below).
You can see that I have set xpack.security.enabled=true on Elasticsearch.
docker-compose.yml
version: '3.2'
services:
  zoo1:
    image: elevy/zookeeper:latest
    environment:
      MYID: 1
      SERVERS: zoo1
    ports:
      - "2181:2181"
  kafka1:
    image: wurstmeister/kafka
    command: [start-kafka.sh]
    depends_on:
      - zoo1
    links:
      - zoo1
    ports:
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9092
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: "168"
      KAFKA_LOG_RETENTION_BYTES: "100000000"
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
      KAFKA_CREATE_TOPICS: "log:1:1"
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.7.0
    command: filebeat -e -strict.perms=false
    volumes:
      - "//c/Users/my-comp/docker_folders/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
      - "//c/Users/my-comp/docker_folders/sample-logs:/sample-logs"
    links:
      - kafka1
    depends_on:
      - kafka1
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=true
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - "//c/Users/my-comp/docker_folders/esdata:/usr/share/elasticsearch/data"
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.7.0
    volumes:
      - "//c/Users/my-comp/docker_folders/kibana.yml:/usr/share/kibana/config/kibana.yml"
    restart: always
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_HOSTS=http://x.x.x.x:9200
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:7.7.0
    volumes:
      - "//c/Users/my-comp/docker_folders/logstash.conf:/config-dir/logstash.conf"
    restart: always
    command: logstash -f /config-dir/logstash.conf
    ports:
      - "9600:9600"
      - "7777:7777"
    links:
      - elasticsearch
      - kafka1
kibana.yml
server.name: kibana
server.host: "0"
xpack.monitoring.ui.container.elasticsearch.enabled: false
elasticsearch.ssl.verificationMode: none
elasticsearch.username: "kibana"
elasticsearch.password: "k12345"
logstash.conf
input {
  kafka {
    codec => "json"
    bootstrap_servers => "kafka1:9092"
    topics => ["app_logs","request_logs"]
    tags => ["myapp"]
  }
}
filter {
  *** not relevant
}
output {
  elasticsearch {
    hosts => ["http://x.x.x.x:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    user => "logstash_system"
    password => "l12345"
  }
}
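A hedged side note on the output above: logstash_system is a built-in user intended for Logstash's internal monitoring, not for indexing event data. The pattern documented by Elastic is a dedicated writer role and user; the names below are illustrative only, created e.g. from Kibana Dev Tools:
POST /_security/role/logstash_writer
{
  "cluster": ["monitor", "manage_index_templates"],
  "indices": [
    {
      "names": ["app_logs-*", "request_logs-*"],
      "privileges": ["create_index", "write", "manage"]
    }
  ]
}
POST /_security/user/logstash_internal
{
  "password": "<CHOOSE_A_PASSWORD>",
  "roles": ["logstash_writer"]
}
The elasticsearch output would then use this logstash_internal user instead of logstash_system.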
In case it is worth mentioning, Logstash is failing to connect to Elasticsearch with this log and, as you can see from logstash.conf, I set up logstash_system (the user created by elasticsearch-setup-passwords interactive):
logstash_1 | [2020-05-19T20:18:45,559][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-19T20:19:13,815][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
So, my straight question is: am I missing some extra configuration in order to enable Security on Kibana? Surrounding questions are: is Kibana/Elastic from Docker not the same as the zip file? Am I missing some extra configuration in order to allow Logstash to connect to Elasticsearch?
*** edited
Logstash is still failing to connect to Elasticsearch after I changed to:
logstash.conf
...
output {
  elasticsearch {
    #hosts => [ "${ELASTIC_HOST1}", "${ELASTIC_HOST2}", "${ELASTIC_HOST3}" ]
    #hosts => ["http://192.168.99.100:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
    xpack.monitoring.elasticsearch.username: "logstash_system"
    xpack.monitoring.elasticsearch.password: => "l12345"
  }
}
The logs are:
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2020-05-20T13:39:05,095][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
logstash_1 | [2020-05-20T13:39:05,120][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.0"}
logstash_1 | [2020-05-20T13:39:06,134][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
logstash_1 | [2020-05-20T13:39:06,150][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2020-05-20T13:39:08,008][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-20T13:39:08,408][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-20T13:39:08,506][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
filebeat_1 | 2020-05-20T13:38:53.069Z INFO log/harvester.go:297 Harvester started for file: /sample-logs/request-2019-11-17F.log
logstash_1 | [2020-05-20T13:39:08,611][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash_1 | [2020-05-20T13:39:11,449][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [A-Za-z0-9_-], [ \\t\\r\\n], \"#\", \"=>\" at line 86, column 7 (byte 2771) after output {\r\n elasticsearch {\r\n #hosts => [ \"${ELASTIC_HOST1}\", \"${ELASTIC_HOST2}\", \"${ELASTIC_HOST3}\" ]\r\n\t#hosts => [\"http://192.168.99.100:9200\"]\r\n index => \"%{[fields][topic_name]}-%{+YYYY.MM.dd}\"\r\n\txpack", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:58:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:66:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:28:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:27:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:43:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:342:in `block in converge_state'"]}
I guess the most relevant part of this log is:
logstash_1 | [2020-05-20T13:39:08,008][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-20T13:39:08,408][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-20T13:39:08,506][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
Note that it is failing with "Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'". I guess that in my particular Docker setup it needs to be the Docker Machine IP, which in my case is 192.168.99.100. Is there some way to replace elasticsearch with this IP?
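For reference, the ConfigurationError at line 86 is a syntax issue: xpack.monitoring.* are Logstash settings that belong in logstash.yml, not options of the elasticsearch output plugin, which is also what the earlier warning about xpack.monitoring.enabled hints at. A minimal sketch of logstash.yml, reusing the question's own placeholder host and credentials:
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "l12345"
The output block itself would then keep only plugin options such as hosts, index, user, and password.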
I have a Logstash instance running with docker-compose on an AWS EC2 (AMI) instance. I have mounted a folder as a volume into the container, and the Logstash pipeline config points the sincedb file at that mounted folder. The pipeline runs, but it never writes the sincedb file.
The same configuration works on my local machine, but not on EC2. I have checked that the user has rights to write in the folder by creating a file there (e.g. vi test).
Docker compose config:
version: "2"
services:
logstash:
image: docker.elastic.co/logstash/logstash:7.2.0
volumes:
- ./logstash/pipeline/:/usr/share/logstash/pipeline/
- ./logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml
- ../data/:/usr/data/:rw
- ./logstash/templates/:/usr/share/logstash/templates/
container_name: logstash
ports:
- 9600:9600
env_file:
- ../env/.env.logstash
Logstash input:
input {
  s3 {
    access_key_id => "${AWS_ACCESS_KEY}"
    bucket => "xyz-bucket"
    secret_access_key => "${AWS_SECRET_KEY}"
    region => "eu-west-1"
    prefix => "logs/"
    type => "log"
    codec => "json"
    sincedb_path => "/usr/data/log-sincedb.file"
  }
}
I've fixed this. I had to explicitly add user: root to the docker-compose service config:
version: "2"
services:
logstash:
image: docker.elastic.co/logstash/logstash:7.2.0
user: root
volumes:
- ./logstash/pipeline/:/usr/share/logstash/pipeline/
- ./logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml
- ../data/:/usr/data/:rw
- ./logstash/templates/:/usr/share/logstash/templates/
container_name: logstash
ports:
- 9600:9600
env_file:
- ../env/.env.logstash
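The likely underlying cause (assuming the stock image defaults) is that the official Logstash image runs as the logstash user, uid 1000, so the host directory mounted at /usr/data must be writable by that uid; running the container as root sidesteps this. An alternative sketch that keeps the container unprivileged is to hand the host folder to uid 1000 instead:
sudo chown -R 1000:1000 ../data/
Either approach lets the s3 input persist its sincedb file across restarts.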
I'm setting up Filebeat to send logs to Elasticsearch. This is my filebeat.yml:
filebeat.prospectors:
- type: log
  paths:
    - '/var/log/project/*.log'
  json.message_key: message

output.elasticsearch:
  hosts: ["localhost:9200"]
I have this file /var/log/project/test.log with this content:
{ "message": "This is a test" }
and I was expecting this log to be sent to Elasticsearch. Elasticsearch is running in a Docker container in localhost at 9200.
When I run filebeat (Docker), no index is created in Elasticsearch. So, in Kibana, I don't see any data.
Why is that? Isn't Filebeat supposed to create the index automatically?
Solved! I wasn't sharing the logs dir between the host and the Filebeat container, so there were no logs to send.
I added a volume when running Filebeat:
docker run -it -v $(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /var/log/project/:/var/log/project/ docker.elastic.co/beats/filebeat:6.4.0
You can set the index name as below:
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "test-%{+yyyy.MM.dd}"