logstash sincedb file not created with docker-compose - amazon-ec2

I have a Logstash instance running with docker-compose on an AWS EC2 (AMI) instance. I have mounted a folder into the container as a volume, and the Logstash pipeline is configured to write its sincedb file in that mounted folder. The pipeline runs, but nothing is ever written to the sincedb file.
The same configuration works on my local machine, just not on EC2. I have checked that the user has rights to write in the folder by creating a file there (e.g. vi test).
Docker compose config:
version: "2"
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.2.0
    volumes:
      - ./logstash/pipeline/:/usr/share/logstash/pipeline/
      - ./logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ../data/:/usr/data/:rw
      - ./logstash/templates/:/usr/share/logstash/templates/
    container_name: logstash
    ports:
      - 9600:9600
    env_file:
      - ../env/.env.logstash
Logstash input:
input {
  s3 {
    access_key_id => "${AWS_ACCESS_KEY}"
    bucket => "xyz-bucket"
    secret_access_key => "${AWS_SECRET_KEY}"
    region => "eu-west-1"
    prefix => "logs/"
    type => "log"
    codec => "json"
    sincedb_path => "/usr/data/log-sincedb.file"
  }
}

I've fixed this. I had to explicitly add user: root to the docker-compose service config.
version: "2"
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.2.0
    user: root
    volumes:
      - ./logstash/pipeline/:/usr/share/logstash/pipeline/
      - ./logstash/settings/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ../data/:/usr/data/:rw
      - ./logstash/templates/:/usr/share/logstash/templates/
    container_name: logstash
    ports:
      - 9600:9600
    env_file:
      - ../env/.env.logstash
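If you would rather not run the whole container as root, one alternative (a sketch, not tested on EC2) is to keep the sincedb in a named volume mounted over /usr/share/logstash/data, a path the logstash user already owns inside the image, so host directory permissions never come into play:

version: "2"
services:
  logstash:
    image: docker.elastic.co/logstash/logstash:7.2.0
    volumes:
      # a named volume over an existing image directory keeps the logstash user's ownership
      - ls_data:/usr/share/logstash/data/
volumes:
  ls_data:

The s3 input would then point at that path, e.g. sincedb_path => "/usr/share/logstash/data/log-sincedb.file".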

Related

Logstash pipeline not showing on Kibana, but logs show Pipelines running

I am trying to set up Elasticsearch, Kibana and Logstash to read logs from a local folder.
It works well on version 7.x.x, but when I try to upgrade to 8 it doesn't.
I am using this YAML file:
version: '3.6'
services:
  Elasticsearch:
    image: elasticsearch:8.4.0
    container_name: elasticsearch
    volumes:
      - elastic_data:/usr/share/elasticsearch/data/
    environment:
      - discovery.type=single-node
      - xpack.license.self_generated.type=basic
      - xpack.security.enabled=false
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elk
  Logstash:
    image: logstash:8.4.0
    container_name: logstash
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - xpack.monitoring.enabled=true
    volumes:
      - ./logstash/:/logstash
      - D:/test/Logs/:/test/Logs
    command: logstash -f /logstash/logstash.conf
    depends_on:
      - Elasticsearch
    ports:
      - '9600:9600'
    networks:
      - elk
  Kibana:
    image: kibana:8.4.0
    container_name: kibana
    ports:
      - '5601:5601'
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - Elasticsearch
    networks:
      - elk
volumes:
  elastic_data: {}
networks:
  elk:
and config for logstash:
input {
  file {
    path => "/test/Logs/test.slog"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
test.slog exists and contains logs.
The Logstash container shows the following logs:
[2022-08-27T20:40:32,592][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"ecs-logstash"}
[2022-08-27T20:40:33,450][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.95}
[2022-08-27T20:40:33,451][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.94}
[2022-08-27T20:40:33,516][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2022-08-27T20:40:33,532][INFO ][logstash.inputs.file ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_327fd1919fa26d08ec354604c3e1a1ce", :path=>["/test/Logs/test.slog"]}
[2022-08-27T20:40:33,559][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-08-27T20:40:33,614][INFO ][filewatch.observingtail ][main][8992bf4e2fad9d8838262d3019319d02ab5ffdcb5b282e821574485618753ce9] START, creating Discoverer, Watch with file and sincedb collections
[2022-08-27T20:40:33,625][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
But when I go to Data -> Index Management there is nothing, and the same for Ingest Pipelines.
What am I doing wrong?
In Elasticsearch 8 the index names created by the logstash output follow the pattern .ds-logs-generic-default-%{+yyyy.MM.dd} instead of logstash-%{+yyyy.MM.dd}.
This .ds index does not appear under Data -> Index Management, but the documents can still be queried.
You can view the .ds-logs-generic index in Kibana under Management > Dev Tools using:
GET _cat/indices
To query the documents you can use the _search API
GET /.ds-logs-generic-default-2022.08.28-000001/_search
{
  "query": {
    "match_all": {}
  }
}
If you want to specify the index name, you can add it to the output section of your logstash.conf, e.g. index => "logstash-%{+YYYY.MM.dd}":
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
The newly created index will show in Kibana under Management > Data > Index Management. You may need to add a few log lines at the end of your logfile to kick the indexing pipeline.

How to run a Beat container that requires authentication from Elasticsearch

The main purpose: I want to use Logstash to collect log files that live on a remote server.
My ELK stack was created using this docker-compose.yml:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
Then I want to install Filebeat on the target host in order to send logs to the ELK host:
docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
  -E setup.kibana.host=x.x.x.x:5601 \
  -E ELASTIC_PASSWORD="changeme" \
  -E output.elasticsearch.hosts=["x.x.x.x:9200"]
but once I hit enter, this error occurs:
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://x.x.x.x:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]
I also tried with -E ELASTICS_USERNAME="elastic", but the error still persists.
You should disable basic X-Pack security, which is enabled in your Elasticsearch 7.x setup, by adding the environment variable below to the ES docker service and restarting the ES container:
xpack.security.enabled: false
After this there is no need to pass ES credentials, and you can also remove the following from your ES environment variables:
ELASTIC_PASSWORD: changeme
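If you would rather keep security enabled, the alternative is to give Filebeat real credentials (ELASTIC_PASSWORD is an Elasticsearch docker variable, not a Filebeat setting). A minimal sketch of the relevant filebeat.yml fragment, assuming the built-in elastic user and the bootstrap password from the compose file:

output.elasticsearch:
  hosts: ["x.x.x.x:9200"]
  username: "elastic"
  password: "changeme"
setup.kibana:
  host: "x.x.x.x:5601"

The same settings can also be passed on the command line as -E output.elasticsearch.username=elastic and -E output.elasticsearch.password=changeme.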

BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL

I use Logstash via the logstash:7.9.1 image and I get this error when I bring up docker-compose, and I don't know what to do about it. (I tried deliberately breaking my Logstash config and pointing it at the wrong Elasticsearch port, but the container still connects to 9200, so I think it isn't reading my Logstash config at all.) Please help!
my error:
[logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
my docker-compose:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    container_name: zookeeper
    ports:
      - 2181:2181
    networks:
      - bardz

  kafka:
    image: wurstmeister/kafka:2.11-1.1.0
    container_name: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: logs-topic:1:1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    ports:
      - 9092:9092
    volumes:
      - kofka-volume:/var/run/docker.sock
    networks:
      - bardz

  elasticsearch:
    build:
      context: elk/elasticsearch/
      args:
        ELK_VERSION: "7.9.1"
    volumes:
      - type: bind
        source: ./elk/elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - bardz

  logstash:
    image: logstash:7.9.1
    restart: on-failure
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    volumes:
      - logstash_data:/bitnami
      - ./elk/logstash/logstash-kafka.conf:/opt/bitnami/logstash/config/logstash-kafka.conf
    environment:
      LOGSTASH_CONF_FILENAME: logstash-kafka.conf
    networks:
      - bardz
    depends_on:
      - elasticsearch

networks:
  bardz:
    external: true
    driver: bridge

volumes:
  elasticsearch:
  zipkin-volume:
  kofka-volume:
  logstash_data:
my logstash config:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["logs-topic"]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => elastic
    password => changeme
    index => "logs-topic"
    workers => 1
  }
}
You are using the wrong password for the elastic user: in 7.9 it was changed from changeme to password, as shown in the ES contribution doc, but I tried it and this seems to apply only when you are running ES from source code.
In any case, a 401 means unauthorized access, and you can read more about it here.
As you are not running ES from source, I would advise you to follow the steps mentioned in this thread to change the password. Since you are running it in docker, you need to go inside the container with docker exec -it <cont-id> /bin/bash and then run the command mentioned in the thread to set your own password.
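Once the password is set, a related cleanup (a sketch, assuming you keep the variable name ELASTIC_PASSWORD): rather than hardcoding the password in logstash-kafka.conf, you can pass it to the logstash service through the compose environment and reference it in the pipeline, using the same ${...} substitution the s3 input in the first question uses.

  logstash:
    image: logstash:7.9.1
    environment:
      LOGSTASH_CONF_FILENAME: logstash-kafka.conf
      ELASTIC_PASSWORD: changeme   # better: load from an env_file so it stays out of the compose file

In the output section you would then use password => "${ELASTIC_PASSWORD}".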

docker-compose: how to use minio in- and outside of the docker network

I have the following docker-compose.yml to run a local environment for my Laravel App.
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: .docker/php/Dockerfile
    ports:
      - 80:80
      - 443:443
    volumes:
      - .:/var/www:delegated
    environment:
      AWS_ACCESS_KEY_ID: minio_access_key
      AWS_SECRET_ACCESS_KEY: minio_secret_key
      AWS_BUCKET: Bucket
      AWS_ENDPOINT: http://s3:9000
    links:
      - database
      - s3
  database:
    image: mariadb:10.3
    ports:
      - 63306:3306
    environment:
      MYSQL_ROOT_PASSWORD: secret
  s3:
    image: minio/minio
    ports:
      - "9000:9000"
    volumes:
      - ./storage/minio:/data
    environment:
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
    command: server /data
As you can see, I use MinIO as AWS S3 compatible storage. This works very well, but when I generate a URL for a file (Storage::disk('s3')->url('some-file.txt')) I obviously get a URL like http://s3:9000/Bucket/some-file.txt, which does not work outside of the Docker network.
I've already tried setting AWS_ENDPOINT to http://127.0.0.1:9000, but then Laravel can't connect to the MinIO server...
Is there a way to configure Docker / Laravel / Minio to generate urls which are accessible in- and outside of the Docker network?
How about the binding address? (not tested)
...
  s3:
    image: minio/minio
    ports:
      - "9000:9000"
    volumes:
      - ./storage/minio:/data
    environment:
      MINIO_ACCESS_KEY: minio_access_key
      MINIO_SECRET_KEY: minio_secret_key
    command: server --address 0.0.0.0:9000 /data
I expanded on the solutions in this question to create a solution that works for me both on localhost and on a server with an accessible DNS name.
The localhost solution is essentially the one described above.
Create localhost host mapping
echo "127.0.0.1 my-minio-localhost-alias" | sudo tee -a /etc/hosts
Set HOSTNAME, use 'my-minio-localhost-alias' for localhost
export HOSTNAME=my-minio-localhost-alias
Create hello.txt
Hello from Minio!
Create docker-compose.yml
This compose file contains the following containers:
minio: minio service
minio-mc: command line tool to initialize content
s3-client: command line tool to generate presigned urls
version: '3.7'
networks:
  mynet:
services:
  minio:
    container_name: minio
    image: minio/minio
    ports:
      - published: 9000
        target: 9000
    command: server /data
    networks:
      mynet:
        aliases:
          # For localhost access, add the following to your /etc/hosts
          # 127.0.0.1 my-minio-localhost-alias
          # When accessing the minio container on a server with an accessible dns, use the following
          - ${HOSTNAME}
  # When initializing the minio container for the first time, you will need to create an initial bucket named my-bucket.
  minio-mc:
    container_name: minio-mc
    image: minio/mc
    depends_on:
      - minio
    volumes:
      - "./hello.txt:/tmp/hello.txt"
    networks:
      mynet:
  s3-client:
    container_name: s3-client
    image: amazon/aws-cli
    environment:
      AWS_ACCESS_KEY_ID: minioadmin
      AWS_SECRET_ACCESS_KEY: minioadmin
    depends_on:
      - minio
    networks:
      mynet:
Start the minio container
docker-compose up -d minio
Create a bucket in minio and load a file
docker-compose run minio-mc mc config host add docker http://minio:9000 minioadmin minioadmin
docker-compose run minio-mc mb docker/my-bucket
docker-compose run minio-mc mc cp /tmp/hello.txt docker/my-bucket/hello.txt
Create a presigned URL that is accessible inside AND outside of the docker network
docker-compose run s3-client --endpoint-url http://${HOSTNAME}:9000 s3 presign s3://my-bucket/hello.txt
Since you are mapping port 9000 on the host to that service, you should be able to access it via s3:9000 if you simply add s3 to your hosts file (/etc/hosts on Mac/Linux).
Add the line 127.0.0.1 s3 to your hosts file and you should be able to access the s3 container from your host machine using http://s3:9000/path/to/file.
This means you can use the s3 hostname both inside and outside the docker network.
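If you prefer not to touch the hosts file, another option on the Laravel side (a sketch, assuming the stock config/filesystems.php, which maps the s3 disk's url option to AWS_URL) is to keep the internal endpoint for SDK calls and set a separate public base URL that generated links use:

  app:
    environment:
      AWS_ENDPOINT: http://s3:9000            # used by the SDK inside the docker network
      AWS_URL: http://localhost:9000/Bucket   # hypothetical public base URL used by Storage::url()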
I didn't find a complete setup of MinIO using docker-compose, so here it is:
version: '2.4'
services:
  s3:
    image: minio/minio:latest
    ports:
      - "9000:9000"
      - "9099:9099"
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - storage-minio:/data
    command: server --address ":9099" --console-address ":9000" /data
    restart: always # necessary since it's failing to start sometimes
volumes:
  storage-minio:
    external: true
In the command section, address is the API address and console-address is where you can connect to the web console. Use the MINIO_ROOT_USER & MINIO_ROOT_PASSWORD values to sign in.
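Note that with this layout S3 clients must talk to the API port (9099 here), not the console port. For example, the Laravel app from the question would point its endpoint at it (a sketch, reusing the s3 service name):

  app:
    environment:
      AWS_ENDPOINT: http://s3:9099   # the API port set via --address above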
Adding the "s3" alias to my local hosts file did not do the trick. But explicitly binding the ports to 127.0.0.1 worked like a charm:
  s3:
    image: minio/minio:RELEASE.2022-02-05T04-40-59Z
    restart: "unless-stopped"
    volumes:
      - s3data:/data
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
    # Allow all incoming hosts to access the server by using 0.0.0.0
    command: server --address 0.0.0.0:9000 --console-address ":9001" /data
    ports:
      # Bind explicitly to 127.0.0.1
      - "127.0.0.1:9000:9000"
      - "9001:9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://127.0.0.1:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
For those who are looking for an S3 integration test against the MinIO object server, especially a Java implementation:
docker-compose file:
version: '3.7'
services:
  minio-service:
    image: quay.io/minio/minio
    command: minio server /data
    ports:
      - "9000:9000"
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: minio123
The actual IntegrationTest class:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.S3Object;
import org.junit.jupiter.api.*;
import org.testcontainers.containers.DockerComposeContainer;

import java.io.File;

@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class MinioIntegrationTest {

    private static final DockerComposeContainer minioContainer =
            new DockerComposeContainer<>(new File("src/test/resources/docker-compose.yml"))
                    .withExposedService("minio-service", 9000);
    private static final String MINIO_ENDPOINT = "http://localhost:9000";
    private static final String ACCESS_KEY = "minio";
    private static final String SECRET_KEY = "minio123";

    private AmazonS3 s3Client;

    @BeforeAll
    void setupMinio() {
        minioContainer.start();
        initializeS3Client();
    }

    @AfterAll
    void closeMinio() {
        minioContainer.close();
    }

    private void initializeS3Client() {
        String name = Regions.US_EAST_1.getName();
        AwsClientBuilder.EndpointConfiguration endpoint = new AwsClientBuilder.EndpointConfiguration(MINIO_ENDPOINT, name);
        s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY)))
                .withEndpointConfiguration(endpoint)
                .withPathStyleAccessEnabled(true)
                .build();
    }

    @Test
    void shouldReturnActualContentBasedOnBucketName() throws Exception {
        String bucketName = "test-bucket";
        String key = "s3-test";
        String content = "Minio Integration test";
        s3Client.createBucket(bucketName);
        s3Client.putObject(bucketName, key, content);
        S3Object object = s3Client.getObject(bucketName, key);
        byte[] actualContent = new byte[22];
        object.getObjectContent().read(actualContent);
        Assertions.assertEquals(content, new String(actualContent));
    }
}

Using a shared MySQL container

TL;DR: Trying to get a WordPress docker-compose container to talk to another docker-compose container.
On my Mac I have a WordPress & MySQL setup which I have built and configured with a linked MySQL server. In production I plan to use a Google Cloud MySQL instance, so I plan on removing the MySQL container from the docker-compose file (unlinking it) and running a separate shared container that I can use from multiple docker containers.
The issue I'm having is that I can't connect the WordPress container to the separate MySQL container. Would anyone be able to shed any light on how I might go about this?
I have tried, unsuccessfully, to create a network, and I have also tried giving the container a fixed IP that the local box can reference via the /etc/hosts file (my preferred configuration, as I can update the file according to the environment).
WP:
version: '2'
services:
  wordpress:
    container_name: spmfrontend
    hostname: spmfrontend
    domainname: spmfrontend.local
    image: wordpress:latest
    restart: always
    ports:
      - 8080:80
    # creates an entry in /etc/hosts
    extra_hosts:
      - "ic-mysql.local:172.20.0.1"
    # Sets up the env, passwords etc
    environment:
      WORDPRESS_DB_HOST: ic-mysql.local:9306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: root
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_TABLE_PREFIX: spm
    # sets the working directory
    working_dir: /var/www/html
    # creates a link to the volume local to the file
    volumes:
      - ./wp-content:/var/www/html/wp-content
# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
MySQL:
version: '2'
services:
  mysql:
    container_name: ic-mysql
    hostname: ic-mysql
    domainname: ic-mysql.local
    restart: always
    image: mysql:5.7
    ports:
      - 9306:3306
    # Create a static IP for the container
    networks:
      ipv4_address: 172.20.0.1
    # Sets up the env, passwords etc
    environment:
      MYSQL_ROOT_PASSWORD: root # TODO: Change this
      MYSQL_USER: root
      MYSQL_PASS: root
      MYSQL_DATABASE: wordpress
    # saves /var/lib/mysql to persistent volume
    volumes:
      - perstvol:/var/lib/mysql
      - backups:/backups
# creates a volume to persist data
volumes:
  perstvol:
  backups:
# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
What you probably want to do is create a shared Docker network for the two containers to use, and point them both to it. You can create a network using docker network create <name>. I will use sharednet as an example below, but you can use any name you like.
Once the network is there, you can point both containers to it. When you're using docker-compose, you would do this at the bottom of your YAML file. This would go at the top level of the file, i.e. all the way to the left, like volumes:.
networks:
  default:
    external:
      name: sharednet
To do the same thing on a normal container (outside compose), you can pass the --network argument.
docker run --network sharednet [ ... ]
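Once both containers are on that network, the WordPress container can reach MySQL by its container name rather than a fixed IP, and it talks to the container port directly (the 9306 host mapping only matters from outside Docker). A sketch using the names from the question:

  wordpress:
    environment:
      WORDPRESS_DB_HOST: ic-mysql:3306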
