How to run a Beats container that requires authentication against Elasticsearch

The main purpose: I want to use Logstash to collect log files that live on a remote server.
My ELK stack was created with this docker-compose.yml:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
Then I want to install Filebeat on the target host in order to ship logs to the ELK host.
docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
-E setup.kibana.host=x.x.x.x:5601 \
-E ELASTIC_PASSWORD="changeme" \
-E output.elasticsearch.hosts=["x.x.x.x:9200"]
but as soon as I hit enter, this error occurs:
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://x.x.x.x:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]
I also tried adding -E ELASTICS_USERNAME="elastic", but the error still persists.

You should disable basic X-Pack security, which is enabled by default in Elasticsearch 7.x, by adding the environment variable below to the ES docker image and restarting the ES container:
xpack.security.enabled: false
After this there is no need to pass ES credentials, and you can also remove the following from your ES environment variables:
ELASTIC_PASSWORD: changeme
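As a rough sketch, the relevant part of the elasticsearch service in the compose file above would then look like this (only the environment block shown; the rest of the service definition stays as it is):

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      discovery.type: single-node
      xpack.security.enabled: "false"

If you would rather keep security enabled, Filebeat expects the credentials under its output settings, e.g. -E output.elasticsearch.username=elastic -E output.elasticsearch.password=changeme, rather than a bare ELASTIC_PASSWORD setting.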

Related

Docker-Compose - TheHive, Cortex, Elasticsearch using Cassandra - question regarding ES localhost listening

I'm deploying on an AWS Ubuntu instance (a VM) using this yml:
version: "3.7"
services:
cassandra:
container_name: cassandra
image: cassandra:3.11
restart: unless-stopped
hostname: cassandra
environment:
- MAX_HEAP_SIZE=1G
- HEAP_NEWSIZE=1G
- CASSANDRA_CLUSTER_NAME=thp
volumes:
- ./cassandra/data:/var/lib/cassandra/data
networks:
- Hive
elasticsearch:
container_name: elasticsearch
image: elasticsearch:7.11.1
environment:
- http.host=0.0.0.0
- discovery.type=single-node
- cluster.name=hive
- script.allowed_types= inline
- thread_pool.search.queue_size=100000
- thread_pool.write.queue_size=10000
- gateway.recover_after_nodes=1
- xpack.security.enabled=false
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms256m -Xmx256m
ulimits:
nofile:
soft: 65536
hard: 65536
volumes:
- ./elasticsearch/data:/usr/share/elasticsearch/data
- ./elasticsearch/logs:/usr/share/elasticsearch/logs
networks:
- Hive
cortex:
container_name: cortex
image: thehiveproject/cortex:latest
depends_on:
- elasticsearch
environment:
- 'JOB_DIRECTORY=/opt/cortex/jobs'
ports:
- '0.0.0.0:9001:9001'
volumes:
- ./cortex/application.conf:/etc/cortex/application.conf
- '/var/run/docker.sock:/var/run/docker.sock'
- ./cortex/log/:/var/log/cortex
- /tmp:/tmp
#- ./cortex/Cortex-Analyzers:/opt/cortex/analyzers
#- .cortex/Cortex-Analyzers/analyzers.json:/opt/cortex/analyzers/analyzers.json
privileged: true
networks:
- Hive
thehive:
container_name: thehive
image: 'thehiveproject/thehive4:latest'
restart: unless-stopped
depends_on:
- cassandra
ports:
- '0.0.0.0:9000:9000'
volumes:
- ./thehive/application.conf:/etc/thehive/application.conf
- ./thehive/data:/opt/thp/thehive/data
- ./thehive/index:/opt/thp/thehive/index
command:
--cortex-port 9001
--cortex-keys ${CORTEX_KEY}
networks:
- Hive
networks:
Hive:
driver: bridge
plus two additional application.conf files for thehive and cortex. The problem I have is that when I list the containers using docker ps or docker compose ps, I can see that thehive and cortex are on 0.0.0.0:9000 and 0.0.0.0:9001 respectively, but elasticsearch only shows 9200/tcp, 9300/tcp. How can I access the web interface of ES locally? I can't figure this out. Using netstat I can't find port 9200 or 9300 listening anywhere.
Elasticsearch does not natively come with a web interface. It exposes a REST API that third-party interfaces can interact with.
One of the most popular tools for visualizing or viewing data in the Elastic Stack is Kibana, which interfaces with Elasticsearch. See this link for more details: https://www.elastic.co/kibana/
ES API Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html
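As a quick sketch of what talking to the REST API looks like (this assumes you also publish port 9200 on the elasticsearch service, which the compose file above does not currently do):

# in docker-compose.yml, under the elasticsearch service, publish the port:
#   ports:
#     - "9200:9200"
# then query the REST API from the host
curl http://localhost:9200/
curl http://localhost:9200/_cat/indices?v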

How to communicate between two services in Fargate using docker compose

I am trying to host Elasticsearch and Kibana in AWS ECS (Fargate). I have created a docker-compose.yml file:
version: '2.2'
services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        limits:
          memory: 8Gb
    command: >
      bash -c
      'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
      /usr/local/bin/docker-entrypoint.sh'
    container_name: es-$ENV
    environment:
      - node.name=es-$ENV
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      # - discovery.seed_hosts=es02,es03
      # - cluster.initial_master_nodes=es01,es02,es03
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=$ES_DB_PASSWORD
      - xpack.security.enabled=true
    logging:
      driver: awslogs
      options:
        awslogs-group: we-two-works-db-ecs-context
        awslogs-region: us-east-1
        awslogs-stream-prefix: es-node
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elastic
  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    container_name: kibana-$ENV
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: $ES_DB_URL
      ELASTICSEARCH_HOSTS: '["http://es-$ENV:9200"]'
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: $ES_DB_PASSWORD
    networks:
      - elastic
    logging:
      options:
        awslogs-group: we-two-works-db-ecs-context
        awslogs-region: us-east-1
        awslogs-stream-prefix: "kibana-node"
volumes:
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0
networks:
  elastic:
    driver: bridge
and pass in the env variables using a .env.development file:
ENV="development"
ES_DB_URL="localhost"
ES_DB_PORT=9200
ES_DB_USER="elastic"
ES_DB_PASSWORD="****"
Then I bring the stack up in ECS (after creating a docker context pointing to ECS) with this command:
docker compose --env-file ./.env.development up
However, after creating the stack, the kibana node fails to establish communication with the elasticsearch node. Here are the logs from the kibana node container:
{
  "type": "log",
  "@timestamp": "2021-12-09T02:07:04Z",
  "tags": ["warning", "plugins-discovery"],
  "pid": 7,
  "message": "Expect plugin \"id\" in camelCase, but found: beats_management"
}
{
  "type": "log",
  "@timestamp": "2021-12-09T02:07:04Z",
  "tags": ["warning", "plugins-discovery"],
  "pid": 7,
  "message": "Expect plugin \"id\" in camelCase, but found: triggers_actions_ui"
}
[BABEL] Note: The code generator has deoptimised the styling of /usr/share/kibana/x-pack/plugins/canvas/server/templates/pitch_presentation.js as it exceeds the max of 500KB.
After doing some research I found that the ECS CLI does not support the service.networks docker compose field, and the documentation gives this instruction: "Communication between services is implemented by SecurityGroups within the application VPC." I am wondering how to express this in the docker-compose.yml file, because the IP addresses are only assigned after the stack is created.
These containers should be able to communicate with each other via their compose service names. So, for example, the kibana container should be able to reach the ES node using es-node. I assume this means you need to set ELASTICSEARCH_HOSTS: '["http://es-node:9200"]'?
I am also not sure about ELASTICSEARCH_URL: $ES_DB_URL. I see you set ES_DB_URL="localhost" but that means that the kibana container will be calling localhost to try to reach the ES service (this may work on a laptop where all containers run on a flat network but that's not how it will work on ECS - where each compose service is a separate ECS service).
[UPDATE]
I took a stab at the compose file provided. Note that I have simplified it a bit by removing some things such as the env file and the logging entries (why did you need them? Compose/ECS will create the logging infra for you).
This file works for me (with gotchas - see below):
services:
  es-node:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    command: >
      bash -c
      'bin/elasticsearch-plugin install analysis-smartcn https://github.com/medcl/elasticsearch-analysis-stconvert/releases/download/v7.9.0/elasticsearch-analysis-stconvert-7.9.0.zip;
      /usr/local/bin/docker-entrypoint.sh'
    container_name: es-node
    environment:
      - node.name=es-node
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=thisisawesome
      - xpack.security.enabled=true
    volumes:
      - elastic_data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
  kibana-node:
    image: docker.elastic.co/kibana/kibana:7.9.0
    deploy:
      resources:
        reservations:
          memory: 8Gb
    container_name: kibana-node
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: es-node
      ELASTICSEARCH_HOSTS: http://es-node:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: thisisawesome
volumes:
  elastic_data:
    driver_opts:
      performance-mode: maxIO
      throughput-mode: bursting
      uid: 0
      gid: 0
There are two major things I had to fix:
1- The Kibana task needed more horsepower (the default 0.5 vCPU and 512MB of memory was not enough). I set the memory to 8GB (which set the CPU to 1) and the Kibana container came up.
2- I had to increase ulimits for the ES container. Some of the error messages in the logs pointed to max open files and vm.max_map_count, which both indicated that ulimits needed to be adjusted. For Fargate you need a special section in the task definition. I know there is a way to embed CFN code into the compose file via overlays, but I found it easier/quicker to docker compose convert the compose file into a CFN template and tweak that by adding this section right below the image:
"ulimits": [
{
"name": "nofile",
"softLimit": 65535,
"hardLimit": 65535
}
]
So to recap, you'd need to take my compose above, convert it into a CFN file, add the ulimits snippet, and deploy it directly via CFN.
You can work backwards from here to re-add your variables etc.
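As a rough sketch of that workflow (the stack name and file name below are just placeholders, not something from the original setup):

# use the ECS docker context created earlier
docker context use myecscontext
# render the compose file into a CloudFormation template
docker compose convert > stack-cfn.yaml
# edit stack-cfn.yaml: add the "ulimits" section shown above to the es-node
# container definition, right below "Image"
# then deploy the tweaked template directly with CloudFormation
aws cloudformation deploy --template-file stack-cfn.yaml --stack-name es-kibana-stack --capabilities CAPABILITY_IAM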
HTH

BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL

I use Logstash via the logstash:7.9.1 image and I get this error when I bring up docker-compose, and I don't know what to do with it. (I tried deliberately breaking my Logstash config and pointing it at the wrong Elasticsearch port, but the container still connects to 9200, so I think it isn't reading its settings from my Logstash config.) Please help!
my error:
[logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
my docker-compose:
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    container_name: zookeeper
    ports:
      - 2181:2181
    networks:
      - bardz
  kafka:
    image: wurstmeister/kafka:2.11-1.1.0
    container_name: kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: logs-topic:1:1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    ports:
      - 9092:9092
    volumes:
      - kofka-volume:/var/run/docker.sock
    networks:
      - bardz
  elasticsearch:
    build:
      context: elk/elasticsearch/
      args:
        ELK_VERSION: "7.9.1"
    volumes:
      - type: bind
        source: ./elk/elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single node discovery in order to disable production mode and avoid bootstrap checks
      # see https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - bardz
  logstash:
    image: logstash:7.9.1
    restart: on-failure
    ports:
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    volumes:
      - logstash_data:/bitnami
      - ./elk/logstash/logstash-kafka.conf:/opt/bitnami/logstash/config/logstash-kafka.conf
    environment:
      LOGSTASH_CONF_FILENAME: logstash-kafka.conf
    networks:
      - bardz
    depends_on:
      - elasticsearch
networks:
  bardz:
    external: true
    driver: bridge
volumes:
  elasticsearch:
  zipkin-volume:
  kofka-volume:
  logstash_data:
my logstash config:
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["logs-topic"]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => elastic
    password => changeme
    index => "logs-topic"
    workers => 1
  }
}
You are using the wrong password for the elastic user: in 7.9 it was changed from changeme to password, as shown in the ES contribution doc, but I tried it and this seems to apply only when you are running ES from source code.
In any case, the 401 you are getting means unauthorized access, and you can read more about it here.
As you are not running ES from source, I would advise you to follow the steps mentioned in this thread to change the password. Since you are running it in docker, you need to go inside the docker container with docker exec -it <cont-id> /bin/bash and then run the command mentioned in the thread to set your own password.
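For reference, a rough sketch of what that looks like in practice (the container name here is a placeholder for whatever your ES container is called; the setup-passwords tool ships with Elasticsearch 7.x, though the exact prompts may differ between versions):

# open a shell inside the running Elasticsearch container
docker exec -it elasticsearch /bin/bash
# set passwords for the built-in users interactively (elastic, kibana_system, logstash_system, ...)
bin/elasticsearch-setup-passwords interactive
# afterwards, update user/password in the logstash output config to match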

Why elasticsearch on docker swarm requires a transport.host=localhost setting?

I'm trying to run Elasticsearch on a docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indices require an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the elasticsearch docker container in the transport.host setting, or when I omit transport.host=localhost entirely.
I think that using the transport.host=localhost setting is wrong. Is there a proper configuration of Elasticsearch in docker swarm?

cadvisor, elasticsearch, docker: no Elasticsearch node available

I'm trying to connect cadvisor to elasticsearch with docker and I'm getting the error:
cadvisor.go:113] Failed to initialize storage driver: failed to create the elasticsearch client - no Elasticsearch node available
docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: "elasticsearch:2.3.3"
    container_name: "elasticsearch"
    ports:
      - "9200:9200"
  kibana:
    image: "kibana:4.5.1"
    container_name: "kibana"
    ports:
      - "5601:5601"
    links:
      - elasticsearch
  cadvisor:
    image: "google/cadvisor:latest"
    container_name: "cadvisor"
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    links:
      - elasticsearch
    restart: always
    command: -storage_driver="elasticsearch" -storage_driver_es_host="http://elasticsearch:9200"
If I change the command to
command: -storage_driver="elasticsearch" -storage_driver_es_host="http://172.22.0.5:9200"
everything works just fine. Any ideas?
What you are missing is an index in Elasticsearch; unfortunately this is not well documented.
Go to your Kibana dashboard, open Dev Tools, and send this request:
PUT /.kibana/index-pattern/cadvisor
{"title" : "cadvisor", "timeFieldName": "container_stats.timestamp"}
