Nginx cannot connect to fluentd in EFK stack - elasticsearch

I am setting up a stack with an application consisting of nginx, redis, mysql, and myapp. Nginx proxies requests to myapp. I want to send logs from nginx to an EFK stack, but an error occurs when starting the nginx service:
Error response from daemon: dial tcp 127.0.0.1:24224: connect: connection refused
docker-compose.yml for stack with myapp
version: "3.8"
services:
nginx:
image: nginx:alpine
deploy:
mode: replicated
replicas: 2
labels:
- traefik.enable=true
- traefik.http.routers.node1.rule=Host(`${NODE1}`)
- traefik.http.routers.node1.service=nginx
- traefik.http.routers.node2.rule=Host(`${NODE2}`)
- traefik.http.routers.node2.service=nginx
- traefik.http.routers.node3.rule=Host(`${NODE3}`)
- traefik.http.routers.node3.service=nginx
- traefik.http.services.nginx.loadbalancer.server.port=80
placement:
constraints:
- node.role == manager
logging:
driver: fluentd
options:
fluentd-address: localhost:24224
tag: nginx-
volumes:
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
ports:
- 80:80
depends_on:
- myapp
networks:
- traefik-public
...
All stacks are on the same traefik-public network; if you ping fluentd from any container, fluentd responds.
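For example, a check like the following (assuming the alpine image; any image with ping works):
docker run --rm --network traefik-public alpine ping -c 1 fluentd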
Part of efk.yml
version: "3.7"
services:
fluentd:
image: registry.rebrainme.com/docker_users_repos/3912/dkr-30-voting/fluentd
deploy:
mode: global
volumes:
- /mnt/fluent.conf:/fluentd/etc/fluent.conf
ports:
- "24224:24224"
- "24224:24224/udp"
depends_on:
- elasticsearch
- kibana
networks:
- traefik-public
...
fluent.conf
<source>
  @type forward
  port 24224
  bind localhost
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
Any help would be appreciated.

Tl;dr
Because you are using 2 compose files (docker-compose.yml and efk.yml), they are not sharing the same network.
To Fix
Combine both files into a single one.
To Fix (keeping the 2 files separate)
You should first of all create the network:
docker network create traefik-public
Then update both compose files with:
networks:
  default:
    external:
      name: traefik-public
This should make it work.
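To confirm that both stacks actually landed on that network, you can list the containers attached to it (a sketch; the Go template just prints container names):
docker network inspect traefik-public \
  --format '{{range .Containers}}{{.Name}}{{"\n"}}{{end}}'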

In order for the EFK stack and the application stack to communicate, you need to bind the fluentd port to the host port (host-mode publishing):
version: "3.7"
services:
fluentd:
image: my_fluentd_image:latest
deploy:
mode: global
configs:
- source: fluent-conf
target: /fluentd/etc/fluent.conf
ports:
- target: 24224
published: 24224
protocol: tcp
mode: host
depends_on:
- elasticsearch
- kibana
networks:
- traefik-public
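Note that the fluentd logging driver connects from the Docker daemon on each node, so fluentd itself must also accept connections on all interfaces rather than only localhost. A minimal forward source for that (a sketch; adjust your fluent.conf accordingly):
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>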

Related

Could not communicate to Elasticsearch

I am trying to send my Node app logs through fluentd to Elasticsearch and Kibana, but I'm having a problem connecting fluentd to Elasticsearch with Docker. I want to dockerize this EFK stack.
I have shared the relevant files below.
Error
Could not communicate to Elasticsearch, resetting connection and trying again. Connection refused - connect(2) for 172.20.0.2:9200 (Errno::ECONNREFUSED)
fluent.conf:
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    user elastic
    password pass
  </store>
</match>
Dockerfile
FROM fluent/fluentd:v1.15-1
USER root
RUN gem install elasticsearch -v 7.6.0
# RUN gem install fluent-plugin-elasticsearch -v 7.6.0
RUN gem install fluent-plugin-elasticsearch -v 4.1.1
RUN gem install fluent-plugin-rewrite-tag-filter
RUN gem install fluent-plugin-multi-format-parser
USER fluent
docker-compose.yml
version: '3'
services:
  fluentd:
    build: ./fluentd
    container_name: loggingFluent
    volumes:
      - ./fluentd/conf:/fluentd/etc
      # - ./fluentd/conf/fluent.conf:/fluentd/etc/fluent.conf
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
      - kibana
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.1
    container_name: elasticsearch-Logging
    ports:
      - 9200:9200
    expose:
      - 9200
    environment:
      discovery.type: 'single-node'
      ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
      xpack.security.enabled: 'true'
      ELASTIC_PASSWORD: 'pass'
  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.1
    container_name: kibana-Logging
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
Maybe I am missing something with Docker networking, since I am using Docker for the first time. I have checked the ports exposed by the containers and they are fine. I have done this without Docker using the same settings, but I'm having trouble doing it with Docker. Looking forward to your responses. Thank you very much.
Adding a username to the Elasticsearch environment solved the issue:
elasticsearch environment:
environment:
  discovery.type: 'single-node'
  ES_JAVA_OPTS: '-Xms1024m -Xmx1024m'
  xpack.security.enabled: 'true'
  ELASTIC_PASSWORD: 'pass'
  ELASTIC_USERNAME: 'elastic'
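To verify the credentials before pointing fluentd at Elasticsearch, a quick check from the host (assuming the port mapping above):
curl -u elastic:pass http://localhost:9200/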

How to run a Beats container that requires authentication from Elasticsearch

The main purpose: I want to use Logstash for collecting log files that live on a remote server.
My ELK stack was created using this docker-compose.yml:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
Then I want to install Filebeat on the target host in order to send logs to the ELK host:
docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
-E setup.kibana.host=x.x.x.x:5601 \
-E ELASTIC_PASSWORD="changeme" \
-E output.elasticsearch.hosts=["x.x.x.x:9200"]
but once I hit enter, this error occurs:
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://x.x.x.x:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]
I also tried with -E ELASTICS_USERNAME="elastic", but the error still persists.
You should disable the basic X-Pack security, which is enabled by default in Elasticsearch 7.x, by setting the environment variable below on the ES Docker image and starting the ES container:
xpack.security.enabled: false
After this, there is no need to pass ES creds, and you can also remove the following from your ES env vars:
ELASTIC_PASSWORD: changeme
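Alternatively, if you would rather keep security enabled, Beats can authenticate instead. A sketch, assuming the same hosts and the built-in elastic user (output.elasticsearch.username and output.elasticsearch.password are standard Beats settings):
docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
  -E setup.kibana.host=x.x.x.x:5601 \
  -E output.elasticsearch.hosts=["x.x.x.x:9200"] \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=changeme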

BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL

I use the logstash:7.9.1 image and I get this error when I run docker-compose up, and I don't know what to do about it. (I tried deliberately breaking my logstash config and pointing it at the wrong Elasticsearch port, but it still connects to 9200, so I think it isn't reading my logstash config at all.) Please help!
my error:
[logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
my docker-compose:
zookeeper:
  image: wurstmeister/zookeeper:3.4.6
  container_name: zookeeper
  ports:
    - 2181:2181
  networks:
    - bardz
kafka:
  image: wurstmeister/kafka:2.11-1.1.0
  container_name: kafka
  depends_on:
    - zookeeper
  environment:
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_CREATE_TOPICS: logs-topic:1:1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  ports:
    - 9092:9092
  volumes:
    - kofka-volume:/var/run/docker.sock
  networks:
    - bardz
elasticsearch:
  build:
    context: elk/elasticsearch/
    args:
      ELK_VERSION: "7.9.1"
  volumes:
    - type: bind
      source: ./elk/elasticsearch/config/elasticsearch.yml
      target: /usr/share/elasticsearch/config/elasticsearch.yml
      read_only: true
    - type: volume
      source: elasticsearch
      target: /usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    ELASTIC_PASSWORD: changeme
    # Use single node discovery in order to disable production mode and avoid bootstrap checks
    # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
    discovery.type: single-node
  networks:
    - bardz
logstash:
  image: logstash:7.9.1
  restart: on-failure
  ports:
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "9600:9600"
  volumes:
    - logstash_data:/bitnami
    - ./elk/logstash/logstash-kafka.conf:/opt/bitnami/logstash/config/logstash-kafka.conf
  environment:
    LOGSTASH_CONF_FILENAME: logstash-kafka.conf
  networks:
    - bardz
  depends_on:
    - elasticsearch
networks:
  bardz:
    external: true
    driver: bridge
volumes:
  elasticsearch:
  zipkin-volume:
  kofka-volume:
  logstash_data:
my logstash config:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["logs-topic"]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => elastic
    password => changeme
    index => "logs-topic"
    workers => 1
  }
}
You are using the wrong password for the elastic user: in 7.9 it was changed from changeme to password, as shown in the ES contribution doc, but I tried it and this seems to apply only when you are running ES from source code.
In any case, a 401 means unauthorized access, and you can read more about it here.
As you are not running the ES code from source, I would advise you to follow the steps mentioned in this thread to change the password. Since you are running it in Docker, you need to go inside the container with docker exec -it <cont-id> /bin/bash and then run the command mentioned in the thread to set your own password.
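For reference, the setup tool that thread points at ships with Elasticsearch 7.x; a sketch of the sequence (replace <cont-id> with your container's ID):
docker exec -it <cont-id> /bin/bash
bin/elasticsearch-setup-passwords interactive
It prompts for new passwords for the built-in users, including elastic, which you then put in the logstash output config.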

Configure an ELK cluster in docker containers

I'm trying to configure an ELK cluster using 2 docker containers.
I'm using the following image:
es241_l240_k461: Elasticsearch 2.4.1, Logstash 2.4.0, and Kibana 4.6.1. (reference: https://hub.docker.com/r/sebp/elk/ )
I have created 2 docker containers for that image with docker-compose; each works perfectly in standalone mode.
I want to link the two ELK nodes to each other in order to create the cluster, but I haven't found a proper solution.
The Elasticsearch node in container1 doesn't communicate with the Elasticsearch node in container2.
These are the two docker-compose.yml files:
CONTAINER1:
version: '2'
services:
  elasticsearch01:
    image: sebp/elk:es241_l240_k461
    ports:
      - "5601:5601"
      - "9200:9200"
      - "9300:9300"
      - "5044:5044"
    volumes:
      - /opt/ELK1/logstash/conf.d:/etc/logstash/conf.d
    privileged: true
CONTAINER2:
version: '2'
services:
  elasticsearch02:
    image: sebp/elk:es241_l240_k461
    ports:
      - "5602:5601"
      - "9201:9200"
      - "9301:9300"
      - "5045:5044"
    volumes:
      - /opt/ELK2/logstash/conf.d:/etc/logstash/conf.d
    privileged: true
I have configured the elasticsearch.yml inside the Docker containers in this way:
NODE IN CONTAINER1:
cluster.name: elasticsearchcluster
node.name: node1
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "172.21.0.2"]
discovery.zen.minimum_master_nodes: 1
NODE IN CONTAINER2:
cluster.name: elasticsearchcluster
node.name: node2
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "172.22.0.2"]
discovery.zen.minimum_master_nodes: 1
The key is the discovery.zen.ping.unicast.hosts parameter: I don't have the real IP address, because it's a docker container.
I tried docker inspect elasticsearch01, which gives the following "IPAddress" property:
"NetworkSettings": {
...
"Networks": {
"ELK1_default": {
...
"Gateway": "172.22.0.1",
"IPAddress": "172.22.0.2",
...
}
}
}
But it doesn't work if I set that IP address.
How to configure the cluster properly?
EDIT
Trying the host IP address and the port, node 1 starts, but node 2 fails with no errors.
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "192.168.0.1:9300"] -> OK
discovery.zen.ping.unicast.hosts: ["127.0.0.1", "192.168.0.2:9300"] -> FAILS with no errors
Instead of using a prepared image with the whole ELK stack, you could go for something like this:
version: '3'
services:
  elasticsearch:
    image: elasticsearch:2.4.1
    ports:
      - 9200:9200
    networks:
      - elk
  elasticsearch_slave:
    image: elasticsearch:2.4.1
    networks:
      - elk
    depends_on:
      - elasticsearch
    command: elasticsearch --discovery.zen.ping.unicast.hosts=elasticsearch
  logstash:
    image: logstash:2.3.3
    hostname: logstash
    networks:
      - elk
    volumes:
      - ./logstash.conf:/config/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - 5044:5044
    command: logstash -f /config/logstash.conf
  kibana:
    image: kibana:4.5.1
    hostname: kibana
    networks:
      - elk
    depends_on:
      - elasticsearch
      - logstash
    ports:
      - 5601:5601
networks:
  elk:
    driver: bridge
Once you start your images using docker-compose up -d, you can scale the slaves with docker-compose scale elasticsearch_slave=5.
Once that's done, you'll have 5 slaves plus a client node that opens up port 9200 as a gateway to the whole cluster.
For instance, after doing that, http://localhost:9200/_cat/nodes?v lists all the nodes in the cluster.
Thanks to Evaldas Buinauskas, I've found the solution with the stack!
First, we need only one docker-compose.yml.
In that file we need to configure two services (one for each container to create) and a network shared between the two services.
This is the new docker-compose.yml:
version: '2'
services:
  elk1:
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    image: sebp/elk:es241_l240_k461
    networks:
      - elk_net
    ports:
      - "5601:5601"
      - "9200:9200"
      - "9300:9300"
      - "5044:5044"
    volumes:
      - /opt/elk/logstash/conf.d:/etc/logstash/conf.d
    privileged: true
  elk2:
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    image: sebp/elk:es241_l240_k461
    networks:
      - elk_net
    ports:
      - "5602:5601"
      - "9201:9200"
      - "9301:9300"
      - "5045:5044"
    depends_on:
      - elk1
    volumes:
      - /opt/elk/logstash/conf.d:/etc/logstash/conf.d
    privileged: true
networks:
  elk_net:
    driver: bridge
The command docker-compose up will create 3 elements:
the container elk1
the container elk2
the network elk_net
With the docker network inspect elk_net command we can view the (docker) ip addresses assigned to the 2 containers.
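For example, this prints the name and address of each container attached to the network (a sketch using docker's Go-template --format flag):
docker network inspect elk_net \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'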
The elasticsearch.yml files have to be configured as follows:
Node elk1:
cluster.name: elasticsearchcluster
node.name: node1
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
network.publish_host: ${IP_ADDRESS_ELK1}
discovery.zen.ping.unicast.hosts: ["${IP_ADDRESS_ELK2}"]
discovery.zen.minimum_master_nodes: 1
Node elk2:
cluster.name: elasticsearchcluster
node.name: node2
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
network.publish_host: ${IP_ADDRESS_ELK2}
discovery.zen.ping.unicast.hosts: ["${IP_ADDRESS_ELK1}"]
discovery.zen.minimum_master_nodes: 1
With this configuration, the cluster works perfectly: the two nodes merge correctly, and an HTTP GET to each Elasticsearch server returns all the documents saved across the 2 nodes.
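A quick way to confirm the merge (a sketch; either node's published port works):
curl http://localhost:9200/_cluster/health?pretty
number_of_nodes should report 2.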

Docker scale and elasticsearch

The public Docker image for Elasticsearch is on Docker Hub:
https://hub.docker.com/_/elasticsearch/
If I defined my own docker-compose file with Elasticsearch, how would I scale up Elasticsearch so that the ports don't collide?
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - "9200:9200"
      - "9300:9300"
  kibana:
    image: kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
How could I scale this up, similar to the command below?
docker-compose scale elasticsearch=3
I am running Docker beta for Mac, version 1.12.
Thanks,
Shane.
If you just want the ports not to collide, use "automatic port mapping" in docker-compose.yml, as below:
version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
    ports:
      - 9200
      - 9300
Docker will then automatically map ports 9200 and 9300 to random host ports in the 32xxx range.
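To find out which host port a given replica received, you can ask Compose for it (a sketch; the --index flag selects the replica, and the index value here is just an example):
docker-compose scale elasticsearch=3
docker-compose port --index=2 elasticsearch 9200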
