I'm deploying on an AWS Ubuntu instance (a VM) using this docker-compose.yml:
version: "3.7"
services:
cassandra:
container_name: cassandra
image: cassandra:3.11
restart: unless-stopped
hostname: cassandra
environment:
- MAX_HEAP_SIZE=1G
- HEAP_NEWSIZE=1G
- CASSANDRA_CLUSTER_NAME=thp
volumes:
- ./cassandra/data:/var/lib/cassandra/data
networks:
- Hive
elasticsearch:
container_name: elasticsearch
image: elasticsearch:7.11.1
environment:
- http.host=0.0.0.0
- discovery.type=single-node
- cluster.name=hive
- script.allowed_types= inline
- thread_pool.search.queue_size=100000
- thread_pool.write.queue_size=10000
- gateway.recover_after_nodes=1
- xpack.security.enabled=false
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms256m -Xmx256m
ulimits:
nofile:
soft: 65536
hard: 65536
volumes:
- ./elasticsearch/data:/usr/share/elasticsearch/data
- ./elasticsearch/logs:/usr/share/elasticsearch/logs
networks:
- Hive
cortex:
container_name: cortex
image: thehiveproject/cortex:latest
depends_on:
- elasticsearch
environment:
- 'JOB_DIRECTORY=/opt/cortex/jobs'
ports:
- '0.0.0.0:9001:9001'
volumes:
- ./cortex/application.conf:/etc/cortex/application.conf
- '/var/run/docker.sock:/var/run/docker.sock'
- ./cortex/log/:/var/log/cortex
- /tmp:/tmp
#- ./cortex/Cortex-Analyzers:/opt/cortex/analyzers
#- .cortex/Cortex-Analyzers/analyzers.json:/opt/cortex/analyzers/analyzers.json
privileged: true
networks:
- Hive
thehive:
container_name: thehive
image: 'thehiveproject/thehive4:latest'
restart: unless-stopped
depends_on:
- cassandra
ports:
- '0.0.0.0:9000:9000'
volumes:
- ./thehive/application.conf:/etc/thehive/application.conf
- ./thehive/data:/opt/thp/thehive/data
- ./thehive/index:/opt/thp/thehive/index
command:
--cortex-port 9001
--cortex-keys ${CORTEX_KEY}
networks:
- Hive
networks:
Hive:
driver: bridge
plus two additional application.conf files for thehive and cortex. The problem I have: when I list containers with docker ps or docker compose ps, I can see thehive and cortex published on 0.0.0.0:9000 and 0.0.0.0:9001 respectively, but elasticsearch only shows 9200/tcp, 9300/tcp. How can I get access to the ES web interface locally? I can't figure this out. Using netstat I can't find port 9200 or 9300 listening anywhere.
Elasticsearch does not natively come with a web interface. It exposes a REST API that third-party interfaces can interact with.
One of the most popular tools for visualizing or viewing data in the Elastic Stack is Kibana, which interfaces with Elasticsearch. See this link for more details: https://www.elastic.co/kibana/
ES API reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html
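As for why netstat shows nothing: the elasticsearch service in the compose file above has no ports mapping, so 9200/tcp and 9300/tcp are only exposed inside the Hive network and never published to the host. A minimal sketch of the change, assuming you want the REST API reachable from the host:

  elasticsearch:
    # ...existing settings unchanged...
    ports:
      - "9200:9200"   # publish the HTTP API to the host

After docker compose up -d, something like curl http://localhost:9200/_cluster/health should then answer from the host.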
I have this problem in my architecture:
kibana | {"type":"log","#timestamp":"2021-04-19T11:02:46+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"Unable to revive connection: http://localhost:9200/"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:02:46+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"No living connections"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:02:46+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:03:16+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"Unable to revive connection: http://localhost:9200/"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:03:16+00:00","tags":["warning","elasticsearch"],"pid":7,"message":"No living connections"}
kibana | {"type":"log","#timestamp":"2021-04-19T11:03:16+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
Below is my docker-compose file, which produces no errors except for Kibana. Searching the net, I've seen the problem could be a memory requirement I need to set, but when I add deploy and then resources I run into other Docker-related issues.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    expose:
      - "9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "prova1:1:1,stream:1:1,output:1:1,input:1:1"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  elasticsearch:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
  jobmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    hostname: "jobmanager"
    expose:
      - "6123"
    ports:
      - "8088:8088"
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  taskmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - jobmanager:jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kibana
    restart: always
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://localhost:9200
      ELASTICSEARCH_HOSTS: "http://localhost:9200"
      elasticsearch.ssl.verificationMode: none
I managed to solve this with the help of the Elastic team over on the Elastic community forum.
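For anyone hitting the same logs: the usual cause of "Unable to revive connection: http://localhost:9200" in a setup like this is that localhost inside the kibana container refers to the container itself, not to the host or to elasticsearch. Kibana must address Elasticsearch by its Compose service name. A sketch of the fix, assuming both services stay on the default Compose network:

  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    ports:
      - 5601:5601
    environment:
      # service name "elasticsearch" resolves via Compose's internal DNS
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"

The memory hunch is likely a red herring; note also that deploy.resources is only honored by Docker Swarm (or docker compose --compatibility), and in version '2' files memory limits are set with mem_limit instead.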
The main purpose: I want to use Logstash to collect log files that reside on a remote server.
My ELK stack was created using this docker-compose.yml:
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    ports:
      - "9200:9200"
      - "9300:9300"
    volumes:
      - '/share/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro'
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      ELASTIC_PASSWORD: changeme
      discovery.type: single-node
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    ports:
      - "5000:5000"
      - "9600:9600"
    volumes:
      - '/share/elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro'
      - '/share/elk/logstash/pipeline/logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro'
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    volumes:
      - '/share/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro'
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
networks:
  elk:
    driver: overlay
and then I want to install Filebeat on the target host in order to send logs to the ELK host:
docker run docker.elastic.co/beats/filebeat-oss:7.5.1 setup \
  -E setup.kibana.host=x.x.x.x:5601 \
  -E ELASTIC_PASSWORD="changeme" \
  -E output.elasticsearch.hosts=["x.x.x.x:9200"]
but once I hit Enter, this error occurs:
Exiting: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://x.x.x.x:9200: 401 Unauthorized: {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}]
I also tried with -E ELASTICS_USERNAME="elastic", but the error persists.
You should disable X-Pack basic security, which appears to be enabled in your Elasticsearch 7.x setup, by adding the environment variable below to the ES Docker service and restarting the container:
xpack.security.enabled: false
After this there is no need to pass ES credentials, and you can also remove the following from your ES environment variables:
ELASTIC_PASSWORD: changeme
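A sketch of the relevant Compose snippet with security switched off (quoting the value so YAML keeps it a string):

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms256m"
      discovery.type: single-node
      xpack.security.enabled: "false"   # disable basic auth on the REST API

Alternatively, if you want to keep security enabled, pass real credentials to Filebeat instead: ELASTIC_PASSWORD is an ES Docker image variable, not a Filebeat setting, so use -E output.elasticsearch.username=elastic -E output.elasticsearch.password=changeme in the setup command.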
I'm trying to run Elasticsearch on a Docker swarm. It works as a single-node cluster for now, but only when the transport.host=localhost setting is included. Here is the main part of docker-compose.yml:
version: "3"
services:
elasticsearch:
image: "elasticsearch:7.4.1" #(base version)
hostname: elasticsearch
ports:
- "9200:9200"
environment:
- cluster.name=elasticsearch
- bootstrap.memory_lock=true
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- transport.host=localhost
volumes:
- "./elasticsearch/volumes:/usr/share/elasticsearch/data"
networks:
- logger_net
volumes:
logging:
networks:
logger_net:
external: true
The above configuration results in a yellow cluster state (because some indices require an additional replica).
The Elasticsearch status page is unavailable when I use the IP of the Elasticsearch container in the transport.host setting, or when I omit transport.host=localhost entirely.
I think using transport.host=localhost is wrong. Is there a proper configuration of Elasticsearch in Docker Swarm?
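One approach that has worked for single-node deployments, offered as a sketch rather than a definitive answer: drop transport.host entirely and let discovery.type=single-node handle cluster bootstrapping, which is the documented single-node mode for 7.x:

    environment:
      - cluster.name=elasticsearch
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node   # replaces transport.host=localhost

The yellow state is a separate issue: a single-node cluster has nowhere to place replica shards, so any index created with number_of_replicas > 0 stays yellow until its replica count is lowered to 0.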
I'm trying to connect cadvisor to elasticsearch with docker and I'm getting the error:
cadvisor.go:113] Failed to initialize storage driver: failed to create the elasticsearch client - no Elasticsearch node available
docker-compose.yml
version: '2'
services:
  elasticsearch:
    image: "elasticsearch:2.3.3"
    container_name: "elasticsearch"
    ports:
      - "9200:9200"
  kibana:
    image: "kibana:4.5.1"
    container_name: "kibana"
    ports:
      - "5601:5601"
    links:
      - elasticsearch
  cadvisor:
    image: "google/cadvisor:latest"
    container_name: "cadvisor"
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:rw
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    links:
      - elasticsearch
    restart: always
    command: -storage_driver="elasticsearch" -storage_driver_es_host="http://elasticsearch:9200"
If I change the command to
command: -storage_driver="elasticsearch" -storage_driver_es_host="http://172.22.0.5:9200"
everything works just fine. Any ideas?
What you are missing is an index in Elasticsearch; unfortunately, this is not well documented.
Go to your Kibana dashboard, open Dev Tools, and send this request:
PUT /.kibana/index-pattern/cadvisor
{"title" : "cadvisor", "timeFieldName": "container_stats.timestamp"}